\section{Proof of Theorem 1} \begin{proof} The definition of $h$ can be expanded to \begin{equation*} h(\vec x) = \vec w^T f(\vec x) + \vec w^T \vec z + b, \;\; \vec z \sim \mathcal{N}(0, \Sigma), \end{equation*} and be reinterpreted as \begin{equation*} h(\vec x) \sim \mathcal{N}(\vec w^T f(\vec x) + b, \vec w^T \Sigma \vec w). \end{equation*} Going further, we can see that the distribution of the margin function is \begin{equation*} m_h(\vec x, y) \sim \mathcal{N}(y(\vec w^T f(\vec x) + b), \vec w^T \Sigma \vec w), \end{equation*} for which the probability of being less than zero is given by the cumulative distribution function of the normal distribution, \begin{equation} \label{eq:theory_clean} P(m_h(\vec x, y) < 0) = \Phi \Bigg ( \frac{-y(\vec w^T f(\vec x) + b)}{\sqrt{\vec w^T \Sigma \vec w}} \Bigg ). \end{equation} From the increasing monotonicity of $\Phi$, we also have that \begin{align*} \max_{\vec \delta : \|\vec \delta\|_p \leq \epsilon} & \Phi \Bigg ( \frac{-y(\vec w^T f(\vec x + \vec \delta) + b)}{\sqrt{\vec w^T \Sigma \vec w}} \Bigg ) \\ =& \Phi \Bigg ( \frac{\max_{\vec \delta : \|\vec \delta\|_p \leq \epsilon} -y(\vec w^T f(\vec x + \vec \delta) + b)}{\sqrt{\vec w^T \Sigma \vec w}} \Bigg ). \end{align*} Suppose the adversarial perturbation, $\vec \delta$, causes the output of the non-stochastic version of $h$ to change by at most $\Delta_p^{\Tilde{h}}(\vec x, \epsilon)$ in magnitude. A number of techniques, such as local Lipschitz constants \cite{tsuzuku2018lipschitz,gouk2020sspd}, can be used to bound this quantity for simple networks. Substituting $\Delta_p^{\Tilde{h}}$ into the previous equation yields \begin{equation} \label{eq:theory_adv} \begin{split} \max_{\vec \delta : \|\vec \delta\|_p \leq \epsilon} & P(m_h(\vec x + \vec \delta, y) \leq 0) \\ \leq & \Phi \Bigg ( \frac{-y(\vec w^T f(\vec x) + b) + \Delta_p^{\Tilde{h}}(\vec x, \epsilon)}{\sqrt{\vec w^T \Sigma \vec w}} \Bigg ). \end{split} \end{equation} Finally, the difference in probabilities of misclassification when the model is and is not under an adversarial attack $\vec \delta$ is given by \begin{equation} \label{eq:theory_adv_gap_2} \begin{split} G_{p,\epsilon}^h(\vec x, y) = \max_{\vec \delta : \|\vec \delta\|_p \leq \epsilon} P(m_h(\vec x + \vec \delta, y) \leq 0)\\ - P(m_h(\vec x, y) \leq 0). \end{split} \end{equation} Combining Equations~\ref{eq:theory_clean} and~\ref{eq:theory_adv} with Equation~\ref{eq:theory_adv_gap_2} results in \begin{equation*} \begin{split} G_{p,\epsilon}^h(\vec x, y) \leq \Phi \Bigg ( \frac{-y(\vec w^T f(\vec x) + b) + \Delta_p^{\Tilde{h}}(\vec x, \epsilon)}{\sqrt{\vec w^T \Sigma \vec w}} \Bigg ) \\- \Phi \Bigg ( \frac{-y(\vec w^T f(\vec x) + b)}{\sqrt{\vec w^T \Sigma \vec w}} \Bigg ). \end{split} \end{equation*} Because the Lipschitz constant of $\Phi$ is $\frac{1}{\sqrt{2 \pi}}$, we can further bound $G_{p,\epsilon}^h$ by \begin{equation*} \label{eq:theory_bound} G_{p,\epsilon}^h(\vec x, y) \leq \frac{\Delta_p^{\Tilde{h}}(\vec x, \epsilon)}{\sqrt{2 \pi \vec w^T \Sigma \vec w}}. \end{equation*} \end{proof}
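As an illustrative sanity check of Equation~\ref{eq:theory_clean} (not part of the proof), the closed-form misclassification probability can be compared against a Monte Carlo estimate. The following NumPy sketch uses arbitrarily chosen values for $\vec w$, $b$, $f(\vec x)$, and $\Sigma$:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
d = 8
w = rng.normal(size=d)      # classifier weight vector
b = 0.1                     # bias
fx = rng.normal(size=d)     # features f(x) of one input
y = 1                       # label in {-1, +1}
A = rng.normal(size=(d, d))
Sigma = A @ A.T             # an arbitrary PSD covariance

# Closed form: Phi(-y (w.f(x) + b) / sqrt(w' Sigma w))
closed = norm.cdf(-y * (w @ fx + b) / np.sqrt(w @ Sigma @ w))

# Monte Carlo estimate over noise draws z ~ N(0, Sigma)
z = rng.multivariate_normal(np.zeros(d), Sigma, size=200000)
mc = np.mean(y * ((fx + z) @ w + b) < 0)
print(closed, mc)  # the two estimates agree closely
\end{verbatim}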
\section{Hyperparameters of Experiments} In Table~\ref{tab:hyperparams}, we provide the hyperparameter setup for all the experiments in our ablation study. Note that we use the same values for both the isotropic and anisotropic variants of our model within the same benchmark. We further clarify that we use a batch size of 128 across all experiments. To choose these values, we split the training data into a training and a validation set and performed a grid search. The grid consisted of negative powers of 10, \{$\mathrm{10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}}$\}, for both hyperparameters. \begin{table}[t] \caption{Values of the learning rate and weight decay for all experiments in our ablation study.} \centering \begin{tabular}{lcc} \toprule Benchmark & Learning rate & Weight decay \\ \midrule CIFAR-10 & $10^{-2}$ & $10^{-4}$ \\ CIFAR-100 & $10^{-2}$ & $10^{-4}$ \\ SVHN & $10^{-2}$ & $10^{-4}$ \\ FMNIST & $10^{-4}$ & $10^{-4}$ \\ \bottomrule \end{tabular} \label{tab:hyperparams} \end{table} \section{Larger Architectures} In the main body of the paper we explore how our method scales with the size of the backbone architecture by experimenting with LeNet++ (small, 60 thousand parameters) and ResNet-18 (medium, 11 million parameters). In Table~\ref{tab:wrn-34-10} we also provide experimental results on CIFAR-10 with the much larger Wide-ResNet-34-10 architecture (46 million parameters). \begin{table}[t] \caption{PGD test scores on CIFAR-10 using WRN-34-10, for different values of attack strength $\epsilon$.} \centering \resizebox{1.0\columnwidth}{!} { \begin{tabular}{lccccccccc} \toprule PGD($\epsilon/255$) & Clean & 1 & 2 & 4 & 8 & 16 & 32 & 64 & 128 \\ \midrule No Defense & 0.97 & 0.63 & 0.60 & 0.26 & 0.12 & 0 & 0 & 0 & 0 \\ WCA-Net & 0.97 & 0.80 & 0.80 & 0.77 & 0.73 & 0.70 & 0.34 & 0.10 & 0 \\ \bottomrule \end{tabular} \label{tab:wrn-34-10} } \end{table} \section{Enforcing Norm Constraints} In Section~\ref{sec:wca} we elaborate on how we use an $\ell^2$ penalty to prevent the magnitudes of the classifier vectors $\vec{w}$ and the covariance matrix $\Sigma$ from increasing uncontrollably. Another approach for controlling the magnitude of the parameters is to enforce norm constraints after each gradient descent update, using a projected subgradient method. The projected subgradient method changes the standard update rule of the subgradient method from \begin{equation*} \vec \theta^{(t+1)} \gets \vec \theta^{(t)} - \alpha \nabla_{\vec \theta} \mathcal{L}(\vec \theta^{(t)}), \end{equation*} to \begin{align*} \vec u^{(t)} &\gets \vec \theta^{(t)} - \alpha \nabla_{\vec \theta} \mathcal{L}(\vec \theta^{(t)}) \\ \vec \theta^{(t+1)} &\gets \underset{\vec v \in \Omega}{\text{arg}\min} \, \|\vec v - \vec u^{(t)}\|_2^2, \end{align*} where $\Omega$ is known as the feasible set. In our case there are three sets of parameters: the feature extractor weights, the linear classifier weights, and the covariance matrix. No projection needs to be applied to the extractor weights, as they are unconstrained. The linear classifier weights have an $\ell^2$ constraint on the vector associated with each class, so their feasible set is an $\ell^2$ ball---there is a known closed-form projection onto the $\ell^2$ ball (see, e.g., \citet{gouk2021iclr}). The feasible set for the covariance matrix is the set of positive semi-definite matrices with bounded singular values. This constraint can be enforced by performing a singular value decomposition on the updated covariance matrix, clipping the singular values to the appropriate threshold, and reconstructing the new projected covariance matrix~\citep{lefkimmiatis2013hessian}.
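Both projections admit a compact implementation. The following NumPy sketch illustrates the two steps (a minimal sketch of the description above; the function names, the symmetrization, and the jitter are ours, and we use an eigendecomposition, which is equivalent to the SVD for a symmetric matrix):
\begin{verbatim}
import numpy as np

def project_l2_ball(u, gamma):
    # Closed-form projection of a classifier vector onto the
    # l2 ball of radius gamma.
    return u / max(1.0, np.linalg.norm(u) / gamma)

def project_covariance(Y, s_max):
    # Project an updated covariance onto the PSD matrices with
    # spectrum bounded by s_max, and return its Cholesky factor.
    Y = (Y + Y.T) / 2.0                # symmetrize the update
    vals, vecs = np.linalg.eigh(Y)
    vals = np.clip(vals, 0.0, s_max)   # clip the spectrum
    Sigma = vecs @ np.diag(vals) @ vecs.T
    # A small jitter keeps the factorization stable when some
    # eigenvalues are clipped to zero.
    L = np.linalg.cholesky(Sigma + 1e-8 * np.eye(Y.shape[0]))
    return Sigma, L
\end{verbatim}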
The final algorithm is given by \begin{align} Y^{(t)} &\gets \Sigma^{(t)} - \alpha \nabla_{\Sigma} \mathcal{L}(\vec \phi^{(t)}, \vec w^{(t)}, L^{(t)}) \nonumber \\ \vec u^{(t)}_i &\gets \vec w_i^{(t)} - \alpha \nabla_{\vec w_i} \mathcal{L}(\vec \phi^{(t)}, \vec w^{(t)}, L^{(t)}) \nonumber \\ \vec \phi^{(t+1)} &\gets \vec \phi^{(t)} - \alpha \nabla_{\vec \phi} \mathcal{L}(\vec \phi^{(t)}, \vec w^{(t)}, L^{(t)}) \nonumber \\ \vec w_i^{(t+1)} &\gets \frac{1}{\text{max}(1, \frac{\|\vec u_i^{(t)}\|_2}{\gamma})} \vec u_i^{(t)} \nonumber \\ U^{(t)} S^{(t)} V^{(t)T} &\gets Y^{(t)T}Y^{(t)} \label{eq:svd-step} \\ \Sigma^{(t+1)} &\gets U^{(t)} \tilde{S}^{(t)} V^{(t)T} \nonumber \\ L^{(t+1)T}L^{(t+1)} &\gets \Sigma^{(t+1)}, \label{eq:cholesky-step} \end{align} where (\ref{eq:svd-step}) performs a singular value decomposition, $\tilde{S}$ denotes the clipped version of $S$, and (\ref{eq:cholesky-step}) computes the Cholesky decomposition. \section{Source Code and Reproducibility} The source code is openly available on GitHub: \url{https://github.com/peustr/WCA-net}. \section{Conclusions} \label{sec:conclusions} In this paper we contribute the first stochastic model for adversarial defense that features fully-trained, anisotropic Gaussian noise, is hyperparameter-free, and does not rely on adversarial training. We provide both theoretical support for the core ideas behind it and experimental evidence of its excellent performance. We extensively evaluate WCA-Net{} on a variety of white-box and black-box attacks, and further show that its high performance is not a result of stochastic (obfuscated) gradients. We therefore consider the proposed model to push the boundary of adversarial robustness. \section{Experiments} \label{sec:experiments} In this Section we present the experiments that demonstrate the efficacy of our model and verify our theoretical analysis. \subsection{Experimental Setup} \myparagraph{Datasets} For comparison against the current state-of-the-art and for our ablation study we use four benchmarks: CIFAR-10, CIFAR-100~\cite{krizhevsky2009learning}, SVHN~\cite{netzer2011reading} and Fashion-MNIST~\cite{corr17fmnist}. CIFAR-10 and CIFAR-100 contain 60K 32x32 color images, 50K for training and 10K for testing, evenly spread across 10 and 100 classes respectively. SVHN can be considered a more challenging version of MNIST~\cite{lecun2010mnist}; it contains almost 100K 32x32 color images of digits (0-9) collected from Google's Street View imagery, with roughly 73K for training and 26K for testing. Fashion-MNIST is a collection of 70K 28x28 grayscale images of clothing, 60K for training and 10K for testing, also spread across 10 classes. \myparagraph{Models} For all benchmarks except F-MNIST we use a ResNet-18~\cite{cvpr16resnet} backbone, while for F-MNIST, a relatively simpler dataset, we use LeNet++~\cite{eccv16lenet}. After the backbone we add a penultimate layer for dimensionality reduction; this enables us to always train a reasonably-sized covariance matrix regardless of the original dimensionality of the feature extractor\footnote{32x32 for the benchmarks with 10 classes, 256x256 for the benchmarks with 100 classes.}. The only restriction on the dimensionality of the penultimate layer is that it needs to be greater than or equal to the number of classes in the task, so as to allow the covariance matrix to align with at least one classifier vector.
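For concreteness, the following PyTorch sketch illustrates this stochastic classification head (a simplified illustration under the assumptions above; the class and variable names are ours, not those of the released implementation):
\begin{verbatim}
import torch
import torch.nn as nn

class StochasticHead(nn.Module):
    # Penultimate reduction + learned anisotropic Gaussian noise
    # + linear classifier.
    def __init__(self, feat_dim, bottleneck_dim, num_classes):
        super().__init__()
        # The bottleneck must be at least as large as the number
        # of classes so Sigma can align with a classifier vector.
        assert bottleneck_dim >= num_classes
        self.reduce = nn.Linear(feat_dim, bottleneck_dim)
        # Learn a lower-triangular factor L, so Sigma = L L^T
        # stays positive semi-definite by construction.
        self.L = nn.Parameter(torch.eye(bottleneck_dim))
        self.classifier = nn.Linear(bottleneck_dim, num_classes)

    def forward(self, features):
        h = self.reduce(features)
        L = torch.tril(self.L)       # keep the factor triangular
        eps = torch.randn_like(h)    # unit Gaussian sample
        z = eps @ L.T                # z ~ N(0, L L^T)
        return self.classifier(h + z)
\end{verbatim}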
The two hyperparameters of note across all of our experiments are the learning rate and the $\ell^2$ penalty (i.e., weight decay), the exact values of which are provided in the supplementary material. \myparagraph{Attacks} We evaluate our method using three white-box adversaries: FGSM~\cite{iclr15fgsm}, PGD~\cite{iclr18pgd} and C\&W~\cite{sp17cnw}, and one black-box attack: the One-Pixel attack~\cite{ec19onepixel}. We parameterize the attacks following the literature~\cite{iccv19pni, cvpr20learn2perturb}. More specifically, FGSM and PGD are set with an attack strength of $\epsilon=8/255$ for CIFAR-10, CIFAR-100 and SVHN, and $\epsilon=0.3$ for F-MNIST. PGD has a step size of $\alpha=\epsilon/10$ and number of steps $k=10$ for all benchmarks, as per~\citet{iccv19pni}. C\&W has a learning rate of $\alpha=5\cdot10^{-4}$, number of iterations $k=1000$, initial constant $c=10^{-3}$ and maximum binary steps $b_{\text{max}} = 9$, the same as~\citet{cvpr20learn2perturb}. For the parameters of the One-Pixel attack we tried to replicate the experimental setup described in the supplementary material of~\citet{cvpr20learn2perturb} for attack strengths of 1, 2 and 3 pixels. We followed their setup with population size $\mathrm{N=400}$ and maximum number of iterations $\mathrm{k_{max}=75}$. However, we noticed that the more pixels we added to our attack, the weaker the attack became, which is counter-intuitive. We attribute this to the small number of iterations; every added pixel substantially increases the search space of the differential evolution algorithm, and 75 iterations are no longer enough to converge when the number of pixels is 2 or 3. Therefore, we maintain a population size of $\mathrm{N=400}$, but increase the number of iterations to $\mathrm{k_{max}=1000}$. For reproducibility purposes, we further clarify that for the differential evolution algorithm we use a crossover probability of $\mathrm{r=0.7}$, a mutation constant of $\mathrm{m=0.5}$, and the following criterion for convergence: \begin{equation*} \sqrt{\text{Var}(\mathcal{E}(X))} \leq \Big | \frac{1}{100N}\sum_{x \in X} \mathcal{E}(x) \Big |, \end{equation*} where $X$ denotes the population, $\mathcal{E}(X)$ the energy of the population, and $\mathcal{E}(x)$ the energy of a single sample. \myparagraph{Expectation over Transformation} Due to the noise injected by SNNs, the gradients used by white-box attacks are stochastic \cite{pmlr18obfuscated}. As a result, the true gradients cannot be correctly estimated by attacks that use only one sample to compute the perturbation. To avoid this issue, we apply Expectation over Transformation (EoT) following~\citet{pmlr18obfuscated}. When generating an attack, we compute the gradients of multiple forward passes using Monte Carlo sampling and perturb the inputs using the averaged gradient at each update. We empirically found 50 MC samples to be reliable (performance begins to saturate around 35 samples and converges by 40); thus, we use 50 across all experiments.
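The following PyTorch sketch shows how such an averaged EoT gradient can be computed (illustrative only; \texttt{model} and \texttt{loss\_fn} are placeholders for the attacked network and its classification loss):
\begin{verbatim}
import torch

def eot_gradient(model, loss_fn, x, y, n_samples=50):
    # Average input gradients over several stochastic forward
    # passes, so the attack uses E[grad] rather than one noisy
    # draw of the gradient.
    grad = torch.zeros_like(x)
    for _ in range(n_samples):
        x_ = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_), y)  # each pass draws fresh noise
        loss.backward()
        grad += x_.grad
    return grad / n_samples
\end{verbatim}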
\subsection{Comparison to Prior Stochastic Defenses} \myparagraph{Competitors} We compare the performance of WCA-Net{} to three recent state-of-the-art stochastic defenses to verify its efficacy. \textbf{Adv-BNN}~\cite{iclr19advbnn}: adversarially trains a Bayesian neural network for defense. \textbf{PNI}~\cite{iccv19pni}: learns an ``intensity'' parameter to control the variance of its SNN. \textbf{Learn2Perturb (L2P)}~\cite{cvpr20learn2perturb}: improves PNI by learning an isotropic perturbation-injection module. Furthermore, there are partial comparisons against \textbf{SE-SNN}~\cite{aaai2021sesnn} and \textbf{IAAT}~\cite{cvpr19denoising}. All experiments use a ResNet-18 backbone for fair comparison. \begin{table}[t] \caption{Comparison of state-of-the-art SNNs for FGSM and PGD attacks on CIFAR-10 and CIFAR-100 with a ResNet-18 backbone. Performance of Adv-BNN, PNI and L2P extracted from~\citet{cvpr20learn2perturb}.} \centering \vskip 0.15in \resizebox{1.\linewidth}{!}{ \begin{tabular}{lcccccc} \toprule & \multicolumn{3}{c}{CIFAR-10} & \multicolumn{3}{c}{CIFAR-100} \\ Method & Clean & FGSM & PGD & Clean & FGSM & PGD \\ \midrule Adv-BNN & 82.2 & 60.0 & 53.6 & $\sim$ 58.0 & $\sim$ 30.0 & $\sim$ 27.0 \\ PNI & 87.2 & 58.1 & 49.4 & $\sim$ 61.0 & $\sim$ 27.0 & $\sim$ 22.0 \\ L2P & 85.3 & 62.4 & 56.1 & $\sim$ 50.0 & $\sim$ 30.0 & $\sim$ 26.0 \\ SE-SNN & 92.3 & 74.3 & - & - & - & - \\ IAAT & - & - & - & 63.9 & - & 18.5 \\ WCA-Net{} & \textbf{93.2} & \textbf{77.6} & \textbf{71.4} & \textbf{70.1} & \textbf{51.5} & \textbf{42.7} \\ \bottomrule \end{tabular} } \label{tab:sota_compare_fgsm_pgd_cifar} \end{table} \begin{table}[t] \caption{Comparison of state-of-the-art SNNs for the white-box C\&W attack and the black-box n-Pixel attack on CIFAR-10 with a ResNet-18 backbone. Performance of competing methods extracted from~\citet{cvpr20learn2perturb}.} \centering \vskip 0.15in \resizebox{1.\linewidth}{!}{ \begin{tabular}{llcccc} \toprule & Attack Strength & Adv-BNN & PNI & L2P & WCA-Net{} \\ \midrule & Clean & 82.2 & 87.2 & 85.3 & \textbf{93.2} \\ \midrule \multirow{4}{*}{\rotatebox[origin=c]{90}{C\&W}} & $\kappa=0.1$ & 78.1 & 66.1 & 84.0 & \textbf{89.4} \\ & $\kappa=1$ & 65.1 & 34.0 & 76.4 & \textbf{78.4} \\ & $\kappa=2$ & 49.1 & 16.0 & 66.5 & \textbf{71.9} \\ & $\kappa=5$ & 16.0 & 0.08 & 34.8 & \textbf{55.0} \\ \midrule \multirow{4}{*}{\rotatebox[origin=c]{90}{n-Pixel}} & 1 pixel & 68.6 & 50.9 & 64.5 & \textbf{90.8} \\ & 2 pixels & 64.6 & 39.0 & 60.1 & \textbf{85.5} \\ & 3 pixels & 59.7 & 35.4 & 53.9 & \textbf{81.2} \\ & 5 pixels & - & - & - & 64.3 \\ \bottomrule \end{tabular} } \label{tab:sota_compare_cw_1px} \end{table} \begin{table}[t] \caption{Comparison of WCA-Net{} to recent state-of-the-art defenses, both stochastic and non-stochastic, on CIFAR-10. All competitors evaluate their models on the untargeted PGD attack, with attack strength $\epsilon=8/255$, and number of iterations $k \in \{7, 10, 20\}$. Some results are extracted from \citet{iccv19pni}.
AT: Use of adversarial training.} \centering \vskip 0.15in \resizebox{1.\linewidth}{!}{ \begin{tabular}{lllcc} \toprule Defense & Architecture & AT & Clean & PGD \\ \midrule RSE \cite{eccv18rse} & ResNext & \xmark & 87.5 & 40.0 \\ DP \cite{sp19dp} & 28-10 Wide ResNet & \xmark & 87.0 & 25.0 \\ TRADES \cite{icml19trades} & ResNet-18 & \checkmark & 84.9 & 56.6 \\ PCL \cite{iccv19pcl} & ResNet-110 & \checkmark & 91.9 & 46.7 \\ PNI \cite{iccv19pni} & ResNet-20 (4x) & \checkmark & 87.7 & 49.1 \\ Adv-BNN \cite{iclr19advbnn} & VGG-16 & \checkmark & 77.2 & 54.6 \\ L2P \cite{cvpr20learn2perturb} & ResNet-18 & \checkmark & 85.3 & 56.3 \\ MART \cite{iclr20mart} & ResNet-18 & \checkmark & 83.0 & 55.5 \\ BPFC \cite{cvpr20bpfc} & ResNet-18 & \xmark & 82.4 & 41.7 \\ RLFLAT \cite{iclr20rlflat} & 32-10 Wide ResNet & \checkmark & 82.7 & 58.7 \\ MI \cite{iclr20mi} & ResNet-50 & \xmark & 84.2 & 64.5 \\ SADS \cite{cvpr20sads} & 28-10 Wide ResNet & \checkmark & 82.0 & 45.6 \\ \midrule WCA-Net{} & ResNet-18 & \xmark & \textbf{93.2} & \textbf{71.4} \\ \bottomrule \end{tabular} } \label{tab:sota_compare_other} \end{table} \subsubsection{White-box Attacks} We first compare our proposed WCA-Net{} to the existing state-of-the-art methods in the white-box attack setting. From the results in Table~\ref{tab:sota_compare_fgsm_pgd_cifar}, we can see that our WCA-Net{} shows a noticeable improvement of $\sim 15\%$ over the strongest competitor, L2P. Moreover, we find that our method does not sacrifice its performance on clean data to afford such strong robustness. An important aspect of WCA that needs to be assessed is its potential to scale with the number of classes. For this reason, we conduct experiments on CIFAR-100, comparing against our previously mentioned competitors, plus IAAT~\cite{cvpr19denoising}, all of which use a ResNet-18 backbone in their architectures. From Table~\ref{tab:sota_compare_fgsm_pgd_cifar} we can see that the adversarial robustness of WCA-Net{} outperforms that of the other methods. We also present the evaluation of our method against the C\&W attack in Table~\ref{tab:sota_compare_cw_1px}. Here, the confidence level $\kappa$ indicates the attack strength. Our WCA-Net{} achieves the best performance, with accuracy degrading gracefully as the confidence increases. \subsubsection{Black-box Attacks} To further verify the robustness of our WCA-Net{}, we conduct experiments on a black-box attack, the One-Pixel attack~\cite{ec19onepixel}. This attack is derivative-free and relies on evolutionary optimization, and its attack strength is controlled by the number of pixels it compromises. We follow~\citet{cvpr20learn2perturb} and consider pixel numbers in $\{1,2,3\}$. Additionally, we report results for a stronger 5-pixel attack. From Table~\ref{tab:sota_compare_cw_1px}, we can see that our method demonstrates the strongest robustness in all cases, showing $\sim 13\%$ to $\sim 22\%$ improvement over the best competitor, Adv-BNN. Importantly, these results show that the robustness of our method does not rely on stochastic gradients. \subsubsection{Stronger Attacks} In addition, we evaluate WCA-Net{} against two stronger attacks that are common in the recent adversarial robustness literature but are not evaluated by the stochastic defenses we outline as direct competitors. These are: (i) PGD$_{100}$, a stronger variant of PGD with 100 random restarts, and (ii) the Square Attack~\cite{eccv20square}, a black-box attack that compromises the attacked image in small, localized, square-shaped updates.
We present the results of our evaluation in Table \ref{tab:stronger_attacks}. \begin{table}[t] \caption{Evaluation of WCA-Net{} with a ResNet-18 backbone on CIFAR-10, against the white-box PGD$_{100}$ and black-box Square Attack, for different values of attack strength $\epsilon$.} \centering \resizebox{1.0\columnwidth}{!} { \begin{tabular}{clccccccccc} \toprule & $\epsilon/255$ & Clean & 1 & 2 & 4 & 8 & 16 & 32 & 64 & 128 \\ \midrule \multirow{2}{*}{\rotatebox[origin=c]{90}{\scriptsize PGD$_{100}$}} & No Def. & 93.3 & 45.3 & 14.6 & 0 & 0 & 0 & 0 & 0 & 0 \\[1pt] & WCA & 93.2 & 73.2 & 72.2 & 72.1 & 71.2 & 69.7 & 56.4 & 28.2 & 10.5 \\[1pt] \midrule \multirow{2}{*}{\rotatebox[origin=c]{90}{\scriptsize Square}} & No Def. & 93.3 & 32.9 & 31.7 & 12.4 & 6.0 & 1.2 & 0 & 0 & 0 \\[1pt] & WCA & 93.2 & 51.7 & 51.7 & 50.4 & 49.0 & 48.8 & 44.3 & 36.9 & 28.6 \\[1pt] \bottomrule \end{tabular} \label{tab:stronger_attacks} } \end{table} \subsection{Comparison to State of the Art} Direct comparison to a wider range of competitors is difficult due to the variety of backbones and settings used. Nevertheless, Table~\ref{tab:sota_compare_other} provides a comparison to recent state-of-the-art stochastic and non-stochastic defenses. We can see that WCA-Net{} achieves excellent performance, even compared to methods that use bigger backbones and make the stronger assumption of adversarial training. \begin{table*}[t] \caption{Ablation study for FGSM and PGD attacks on CIFAR-10, CIFAR-100, SVHN and F-MNIST. For CIFAR-10, CIFAR-100 and SVHN we use a ResNet-18, and for F-MNIST a LeNet++ backbone.} \centering \vskip 0.15in \resizebox{1.\linewidth}{!}{ \begin{tabular}{lcccccccccccc} \toprule & \multicolumn{3}{c}{CIFAR-10} & \multicolumn{3}{c}{CIFAR-100} & \multicolumn{3}{c}{SVHN} & \multicolumn{3}{c}{F-MNIST} \\ \midrule Model & Clean & FGSM & PGD & Clean & FGSM & PGD & Clean & FGSM & PGD & Clean & FGSM & PGD \\ \midrule No Defense & 93.3 & 14.9 & 3.9 & 72.2 & 12.3 & 1.2 & 93.4 & 55.6 & 23.5 & 90.8 & 26.4 & 12.0 \\ WCA-Net{} Isotropic & 93.1 & 60.7 & 55.9 & 70.1 & 27.5 & 21.8 & 93.4 & 45.0 & 40.1 & 90.1 & 63.5 & 37.2 \\ WCA-Net{} Anisotropic & 93.2 & 77.6 & 71.4 & 70.1 & 51.5 & 42.7 & 93.4 & 87.6 & 85.7 & 90.1 & 65.2 & 48.5 \\ \bottomrule \end{tabular} } \label{tab:comparison_isotropic_anisotropic} \end{table*} \begin{table}[t] \caption{Control experiments on CIFAR-10 for further analysis. See Sec.~\ref{sec:further_analysis}. AT: Training purely with adversarial examples.
CT+AT: Training with a mix of clean and adversarial examples.} \centering \vskip 0.15in \resizebox{1.\linewidth}{!}{ \begin{tabular}{lccc} \toprule Experiment & Clean & FGSM & PGD \\ \midrule No Defense & 93.3 & 14.9 & 3.9 \\ WCA-Net{} (Penalty regularizer) & 93.2 & 77.6 & 71.4 \\ WCA-Net{} (Constraint regularizer) & 92.2 & 62.9 & 53.2 \\ \midrule E1: Test without EoT & 93.2 & 82.9 & 75.1 \\ E2: Average multiple noise samples & 93.2 & 70.3 & 68.8 \\ E3: Noise trained independently & 93.1 & 45.0 & 41.6 \\ \midrule WCA-Net{}: AT & 88.1 & 75.4 & 70.4 \\ WCA-Net{}: CT+AT & 90.0 & 75.6 & 70.7 \\ \bottomrule \end{tabular} } \label{tab:control_experiments} \end{table} \begin{table}[t] \caption{Comparison between the undefended ResNet-18 baseline and WCA-Net{} with a ResNet-18 backbone for Imagenette (high-res, 10 categories) and mini-ImageNet (large-scale, 100 categories) under FGSM and PGD attacks.} \centering \vskip 0.15in \resizebox{1.\linewidth}{!}{ \begin{tabular}{lcccccc} \toprule & \multicolumn{3}{c}{Imagenette} & \multicolumn{3}{c}{mini-ImageNet} \\ Model & Clean & FGSM & PGD & Clean & FGSM & PGD \\ \midrule No Defense & 75.5 & 8.4 & 0 & 51.9 & 5.0 & 0 \\ WCA-Net{} & 74.2 & 59.3 & 48.7 & 51.3 & 41.6 & 30.4 \\ \bottomrule \end{tabular} } \label{tab:imagenet_experiments} \end{table} \subsection{Further Analysis} \label{sec:further_analysis} \myparagraph{Ablation Study} We perform an ablation study on four benchmarks, CIFAR-10, CIFAR-100, SVHN and F-MNIST, to investigate the contribution of anisotropic noise, as shown in Table~\ref{tab:comparison_isotropic_anisotropic}. For each benchmark, we evaluate a ``clean'' baseline architecture, consisting only of the backbone and the classification layer. We then evaluate a variant of WCA-Net{} with isotropic noise and one with anisotropic noise. We observe that our anisotropic noise provides a consistent benefit to adversarial robustness. Another important observation is that there is no trade-off between the robust and clean performance of our models; both the isotropic and anisotropic variants of WCA-Net{} maintain the clean performance of the baseline defenseless model. All the FGSM and PGD attacks in Table~\ref{tab:comparison_isotropic_anisotropic} use attack strength $\epsilon=8/255$. For completeness, we report the performance of all the variants above against FGSM and PGD with various attack strengths $\epsilon=2^n$, $n \in \{0,\dots,7\}$, on CIFAR-10, as shown in Figure~\ref{fig:ablation_epsilon}. From these results, we can see that the overall trend is consistent with the observations in Table~\ref{tab:comparison_isotropic_anisotropic}. We can also see that the performance of our variants degrades more gracefully than that of the defenseless baseline. \myparagraph{Large-scale, high-resolution} We are further interested in showing that our WCA-Net{} can handle high-resolution images and more challenging datasets. For that purpose we evaluate our method on two additional benchmarks: (i) Imagenette\footnote{\url{https://github.com/fastai/imagenette}}, a subset of ImageNet with 10 classes and full-resolution images, and (ii) mini-ImageNet~\cite{neurips16miniImagenet}, a large subset of ImageNet with 100 classes and 84x84 images, designed to be more challenging than CIFAR-100. The results presented in Table \ref{tab:imagenet_experiments} demonstrate that our method generalizes quite well to both high-resolution images and more challenging datasets.
\myparagraph{Norm-constrained architecture} As explained in Section~\ref{sec:wca}, we control the magnitude of the weights in our architecture by means of $\ell^2$ regularization. Another option to achieve the same effect is to apply norm constraints to the classification vectors $\vec{w_i}$ and the covariance matrix $\Sigma$. A detailed explanation of how we apply these norm constraints is given in the supplementary material. In Table~\ref{tab:control_experiments} we report the results of a WCA-Net{} variant with a norm-constraint regularizer. Constraint-based regularization still provides good robustness, but is weaker than the $\ell^2$ penalty-based variant. \myparagraph{E1: Importance of EoT} To show the impact of EoT, we also evaluate the test performance without it. Table~\ref{tab:control_experiments} shows that the test performance increases without EoT. This makes sense, as argued by~\citet{pmlr18obfuscated}: one gradient sample is not enough to construct an effective attack. \myparagraph{E2: Average multiple noise samples at test time} Our model's forward pass performs the following: (i) extract features from the penultimate layer of the backbone, (ii) inject additive noise, and (iii) compute the logits. By default we draw a single noise sample, as suggested by our theory. In this experiment, we sample from the distribution multiple times and average the final logits. The more noise samples we average, the more we expect the additive noise to lose its regularization effect. The experimental results in Table~\ref{tab:control_experiments} confirm that averaging more ($n=10$) samples degrades robustness. \myparagraph{E3: Train noise and model independently} In this experiment, we first train the model without injecting any noise. Then, keeping the model parameters frozen, we train the noise independently. In Table~\ref{tab:control_experiments} we can see that this variant achieves an elementary level of robustness that is better than the defenseless baseline shown in Table~\ref{tab:comparison_isotropic_anisotropic}, but not as strong as the isotropic variant. As mentioned in Section~\ref{sec:wca}, a key insight of Theorem 1 is that the noise and weights should co-adapt. As expected, keeping the weight vectors $\vec w_i$ frozen limits how much the WCA term (see Eq.~\ref{eq:loss_wca}) can grow, so it never realizes its full potential. \myparagraph{Adversarial training} Our proposed method only requires clean data for training. To verify that adversarial training adds no benefit, we adversarially train our anisotropic WCA-Net{} in two settings: (i) purely with adversarial examples and (ii) with a mix of clean and adversarial examples. We train with a PGD attack with $\epsilon=8/255$ and $k=10$. From the results in Table~\ref{tab:control_experiments}, we can see that incorporating adversarial training harms our performance on clean data, as expected~\cite{iclr15fgsm}, while providing no consistent benefit for adversarial defense. \begin{figure}[t] \centering \includegraphics[width=.49\columnwidth]{Figures/ablation_fgsm.pdf} \includegraphics[width=.49\columnwidth]{Figures/ablation_pgd.pdf} \caption{Evaluation of our model variants (see Table \ref{tab:comparison_isotropic_anisotropic}) for different attack strengths $\epsilon=2^n, \; n \in \{0,\dots,7\}$, specifically for the FGSM (left) and PGD (right) attacks on CIFAR-10.
Best viewed in color.} \label{fig:ablation_epsilon} \end{figure} \subsection{Inspection of Gradient Obfuscation} \citet{pmlr18obfuscated} proposed a set of criteria for inspecting whether a stochastic defense method relies on obfuscated gradients. Following~\citet{iccv19pni}, we summarize these criteria as a checklist. If any item in this checklist holds true, the stochastic defense is deemed unreliable. The following analysis verifies that our model's strong robustness is not caused by gradient obfuscation. \myparagraph{Criterion 1:} One-step attacks perform better than iterative attacks. \myparagraph{Refutation:} Knowing that PGD is an iterative variant of FGSM, we use our existing evaluation to refute this criterion. From the results in Tables \ref{tab:sota_compare_fgsm_pgd_cifar}, \ref{tab:comparison_isotropic_anisotropic} and \ref{tab:control_experiments}, we can see that our WCA-Net{} performs consistently better against FGSM than against PGD. \myparagraph{Criterion 2:} Black-box attacks perform better than white-box attacks. \myparagraph{Refutation:} From Tables~\ref{tab:sota_compare_fgsm_pgd_cifar} and \ref{tab:sota_compare_cw_1px} we observe that FGSM and PGD outperform the 1-pixel attack. In Figure~\ref{fig:ablation_epsilon} we see the effect of increasing the attack strength on both white-box attacks, and they still outperform the stronger 2-, 3- and 5-pixel attacks. \myparagraph{Criterion 3:} Unbounded attacks do not reach 100\% success. \myparagraph{Refutation:} For fair comparison to previous work, FGSM and PGD in this paper are parameterized following~\citet{iccv19pni}. However, for this check we deliberately increase the attack strength of PGD to $\epsilon=255/255$ and the number of iterations to $k=20$. We evaluate all of our models against this attack, and they achieve an accuracy of 0\%, i.e., the unbounded attack reaches 100\% success. \myparagraph{Criterion 4:} Random sampling finds adversarial examples. \myparagraph{Refutation:} To assess this, we hand-pick 100 CIFAR-10 test images that our model successfully classifies during standard testing (100\% accuracy), but misclassifies under FGSM with $\epsilon=8/255$ (0\% accuracy). For each of these test images, we randomly sample 1,000 perturbed images within the same $\epsilon$-ball, and replace the original image if any of the samples results in misclassification. We then evaluate our model on these 100 images and obtain a performance of 98\%. \myparagraph{Criterion 5:} Increasing the distortion bound does not increase success. \myparagraph{Refutation:} Figure~\ref{fig:ablation_epsilon} shows that increasing the distortion bound increases the attack's success. \subsection{Empirical Evaluation of Theorem~\ref{thm:bound}} \begin{figure}[t] \centering \includegraphics[width=.49\columnwidth]{Figures/bound_isotropic.pdf} \includegraphics[width=.49\columnwidth]{Figures/bound_anisotropic.pdf} \caption{Evaluating our bound. Plots of the test set accuracy of SVMs trained on the zero and one digits of MNIST. We report the performance of models trained with isotropic (left) and anisotropic (right) noise, and the worst-case performance according to Theorem~\ref{thm:bound}. The anisotropic model provides a stronger worst-case guarantee than the isotropic model, as well as better empirical performance. Best viewed in color.} \label{fig:bound-plot} \end{figure} To evaluate the tightness of the bound presented in Theorem~\ref{thm:bound}, we train linear Support Vector Machines (SVMs) on the zero and one digits of the MNIST dataset.
Using a linear model allows us to compute the numerator using the technique of \citet{gouk2020sspd}, \begin{equation*} \Delta_{\infty}^{\tilde{h}}(\vec x, \epsilon) = \epsilon \|\vec w\|_1, \end{equation*} where $\vec w$ is the weight vector of the SVM. We use principal component analysis to reduce the images to 32 dimensions, and apply learned isotropic and anisotropic noise to these reduced features before classification with the SVM. The covariance matrix and SVM weights are found by minimizing the hinge loss plus the WCA loss term using gradient descent. Results of attacking these models with PGD, and the lower bound on performance as computed by Theorem~\ref{thm:bound}, are given in Figure~\ref{fig:bound-plot}. From these plots we can see: (i) the bound is not violated at any point, corroborating our analysis; (ii) the bound remains non-vacuous for reasonable (i.e., likely imperceptible) values of the attack strength; and (iii) the model with anisotropic noise is more robust than the model with isotropic noise. This last finding is particularly interesting because, in the linear model regime, PGD attacks are able to find globally optimal adversarial examples.
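A minimal NumPy sketch of how the resulting bound can be evaluated for this linear model (the function name is ours; $\vec w$ and $\Sigma$ are the trained SVM weights and noise covariance):
\begin{verbatim}
import numpy as np

def adversarial_gap_bound(w, Sigma, eps):
    # Theorem 1 for a linear model under an l-infinity attack:
    # Delta_inf(x, eps) = eps * ||w||_1, so
    # G <= eps * ||w||_1 / sqrt(2 pi w' Sigma w).
    delta = eps * np.abs(w).sum()
    return delta / np.sqrt(2.0 * np.pi * (w @ Sigma @ w))

# The worst-case accuracy curve in the figure is then the clean
# accuracy minus this bound on the gap, evaluated per eps.
\end{verbatim}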
\begin{figure}[t] \centering \includegraphics[width=.49\columnwidth]{Figures/wca_isotropic_contours.pdf} \includegraphics[width=.49\columnwidth]{Figures/wca_anisotropic_contours.pdf} \caption{Visualisation of our models on F-MNIST with a 2D bottleneck. Contours and arrows indicate the noise covariance $\Sigma$ and weights $\vec w_i$. Left: WCA-Net{} with isotropic noise. Right: WCA-Net{} with anisotropic noise. Evidently, our WCA-Net{} with anisotropic noise allows the covariance to be aligned with off-axis weights.} \label{fig:wca_contours} \end{figure} \subsection{Empirical Observations about WCA} Figure~\ref{fig:wca_contours} shows the effect of our regularization methods with a bivariate Gaussian, by plotting the contours of the distribution against the weight vectors of the classification layer. These visualizations are obtained by training our WCA-Net{} variants with a LeNet++ backbone on F-MNIST, with a 2-dimensional bottleneck and a 2x2 covariance matrix. We observe the following: (i) In the left panel of Figure~\ref{fig:wca_contours}, the learned noise is axis-aligned, since the injected noise is isotropic. Further, the weight vectors are near-axis-aligned, as WCA pushes them to align with the learned noise. (ii) In the right panel, due to the combination of anisotropic noise and WCA, our model has weight-aligned noise, and the weights are free to be non-axis-aligned. Overall, we observe better alignment between the learned weight vectors and the eigenvectors of the covariance matrix in our proposed anisotropic WCA-Net{}. \section{Introduction} \label{sec:introduction} It has been shown that deep convolutional neural networks, while displaying exceptional performance in computer vision problems such as image recognition~\cite{cvpr16resnet}, are vulnerable to input perturbations that are imperceptible to the human eye~\cite{iclr14intriguing}. The perturbed input images, known as adversarial examples, can be generated by single-step~\cite{iclr15fgsm} and multi-step~\cite{iclr18pgd,iclr2016bim,sp17cnw} updates using both gradient-based optimization methods and derivative-free approaches~\cite{acmais2017zoo}. This vulnerability raises the question of how one can go about ensuring the security of machine learning systems, thus preventing a malicious entity from exploiting instabilities~\cite{tkde2013security}. In order to tackle this problem, many adversarial defense algorithms have been proposed in the literature. Among them, Stochastic Neural Networks (SNNs) that inject fixed or learnable noise into their hidden layers have shown promising results~\cite{eccv18rse,iclr19advbnn,iccv19pni,cvpr20learn2perturb,aaai2021sesnn}. In this paper, we identify three limitations of the current state-of-the-art stochastic defense methods. First, most contemporary adversarial defense methods use a mixture of clean and adversarial (or even purely adversarial) samples during training, i.e., adversarial training~\cite{iclr15fgsm, iclr18pgd, iclr19advbnn, iccv19pcl, iccv19pni, cvpr20learn2perturb}. However, generating strong adversarial examples during training leads to significantly higher computational cost and longer training time. Second, many existing adversarial defenses~\cite{iccv19pcl}, and especially stochastic defenses~\cite{cvpr20learn2perturb}, are heuristically motivated. Although they may be empirically effective against existing attacks, they lack theoretical support. Third, the noise incorporated by existing stochastic models is \emph{isotropic} (i.e., generated from a multivariate Gaussian distribution with a diagonal covariance matrix), meaning that it perturbs the learned features of different dimensions independently. Our theoretical analysis will show that this is a strong assumption and that the best performance is expected from \emph{anisotropic} noise. We address the aforementioned limitations and propose an SNN that makes use of learnable anisotropic noise. We theoretically analyse the margin between the clean and adversarial performance of a stochastic model and derive an upper bound on the difference between these two quantities. This novel theoretical insight suggests that the anisotropic noise covariance in an SNN should be optimized to align with the classifier weights, which has the effect of tightening the bound on the gap between clean and adversarial performance. This leads to an easy-to-implement regularizer, which can be efficiently optimized on clean samples alone, without the need for adversarial training. We show that our method, called Weight-Covariance Alignment (WCA), can be applied to architectures of varied depth and complexity (namely, LeNet++ and ResNet-18), and achieves state-of-the-art robustness across several widely used benchmarks, including CIFAR-10, CIFAR-100, SVHN and F-MNIST. Moreover, this high level of robustness is demonstrated for both white-box and black-box attacks. We name our proposed model WCA-Net{}. The contributions of our paper are summarized as follows: \begin{itemize} \item While the majority of existing stochastic defenses are heuristically motivated, our proposed method is derived by optimizing a learning theoretic bound, providing solid justification for its robust performance. \item To the best of our knowledge, we are the first to propose a stochastic defense with learned anisotropic noise. \item WCA only requires clean samples for training, unlike most current state-of-the-art defenses, which depend on costly adversarial training. \item We demonstrate the state-of-the-art performance of our method on various benchmarks and its resilience to both white- and black-box attacks.
\end{itemize} \section{Methods} \label{sec:methods} Based on a theoretical analysis of how the injected noise can impact generalisation performance, expanded upon in Section~\ref{sec:wca}, we propose a weight-covariance alignment loss term that encourages the weight vectors associated with the final linear classification layer to be aligned with the covariance matrix of the injected noise. Consequently, our theory leads us to use anisotropic noise, rather than the isotropic noise typically employed by previous approaches. Our method fits into the family of SNNs that apply additive noise to the penultimate activations of the network. Consider the function, $f(\vec x)$, which implements the feature extractor portion of the network, i.e., everything except the final classification layer. Our WCA-Net{} architecture is defined as \begin{equation*} h(\vec x) = W (f(\vec x) + \vec z) + \vec b, \;\; \vec z \sim \mathcal{N}(0, \Sigma), \end{equation*} where $W$ and $\vec b$ are the parameters of the final linear layer, and $\vec z$ is the vector of additive noise. The objective function used to train this model is \begin{equation} \label{eq:loss_func} \mathcal{L} = \mathcal{L}_{\text{C}} - \mathcal{L}_{\text{WCA}}, \end{equation} where $\mathcal{L}_{\text{C}}$ and $\mathcal{L}_{\text{WCA}}$ represent the classification loss (e.g., softmax composed with cross-entropy) and the weight-covariance alignment term, respectively. We describe each of our technical contributions in the remainder of this Section. \subsection{Weight-Covariance Alignment} \label{sec:wca} Non-stochastic methods for defending against adversarial examples typically try to guarantee that the prediction for an input image cannot be changed. In contrast, a defense that is stochastic should aim to minimize the probability that the prediction can be changed. In this Section, we present a theoretical analysis of the probability that the prediction of an SNN will be changed by an adversarial attack. For simplicity, we restrict our analysis to the case of binary classification. Denoting a feature extractor as $f$, we define an SNN, $h$, trained for binary classification, as \begin{equation*} h(\vec x) = \vec w^T (f(\vec x) + \vec z) + b, \;\; \vec z \sim \mathcal{N}(0, \Sigma), \end{equation*} where $\vec w$ is the weight vector of the classification layer and $b$ is the bias. We denote the non-stochastic version of $h$, where the value of $\vec z$ is always a vector of zeros, as $\Tilde{h}$. The margin of a prediction is given by \begin{equation*} m_h(\vec x, y) = y h(\vec x), \end{equation*} for $y \in \{-1, 1\}$. It is positive if the prediction is correct and negative otherwise. The quantity in which we are interested is the difference in probabilities of misclassification when the model is and is not under an adversarial attack $\vec \delta$, which is given by \begin{equation} \label{eq:theory_adv_gap} \begin{split} G_{p,\epsilon}^h(\vec x, y) = \max_{\vec \delta : \|\vec \delta\|_p \leq \epsilon} P(m_h(\vec x + \vec \delta, y) \leq 0)\\ - P(m_h(\vec x, y) \leq 0). \end{split} \end{equation} Our main theoretical result, given below, shows how one can take an adversarial robustness bound, $\Delta_p^{\Tilde{h}}(\vec x, \epsilon)$, for the deterministic version of a network, and transform it into a bound on $G$ for the stochastic version of the network.
\begin{theorem} \label{thm:bound} The quantity $G_{p,\epsilon}^h(\vec x, y)$, as defined above, is bounded as \begin{equation*} G_{p,\epsilon}^h(\vec x, y) \leq \frac{\Delta_p^{\Tilde{h}}(\vec x, \epsilon)}{\sqrt{2 \pi\vec w^T \Sigma \vec w }}, \end{equation*} where the robustness of the deterministic version of $h$ is known to be bounded as $|\Tilde{h}(\vec x) - \Tilde{h}(\vec x + \vec \delta)| \leq \Delta_p^{\Tilde{h}}(\vec x, \epsilon)$ for any $\|\vec \delta\|_p \leq \epsilon$. \end{theorem} The proof is provided in the supplementary material. We can see from Theorem~\ref{thm:bound} that increasing the bilinear form, $\vec w^T \Sigma \vec w$, of the noise covariance and the classifier weights reduces the gap between clean and robust performance. As such, we define the loss term \begin{equation} \label{eq:loss_wca} \mathcal{L}_{\text{WCA}} = \sum_{i=1}^C \ln(\vec w_i^T \Sigma \vec w_i), \end{equation} where $C$ is the number of classes in the classification problem, and $\vec w_i$ is the weight vector of the final layer that is associated with class $i$. We found that including the logarithm results in balanced growth rates between the $\mathcal{L}_{\text{C}}$ and $\mathcal{L}_{\text{WCA}}$ terms in Eq.~\ref{eq:loss_func} as training progresses, hence improving the reliability of training loss convergence. The key insight of Theorem~\ref{thm:bound}, operationalized by Eq.~\ref{eq:loss_wca}, is that the noise and weights should co-adapt to align the noise and weight directions. We call this loss Weight-Covariance Alignment (WCA) because it is maximized when each $\vec w_i$ is well aligned with the eigenvectors of the covariance matrix. This WCA loss term risks maximizing the magnitude of $\vec w$, rather than encouraging alignment or increasing the scale of the noise. To avoid uncontrollable scaling of the network parameters, it is common practice to penalize large weights by means of $\ell^2$ regularization: \begin{equation*} \mathcal{L} = \mathcal{L}_{\text{C}} - \mathcal{L}_{\text{WCA}} + \lambda \vec w^T \vec w, \end{equation*} where $\lambda$ controls the strength of the penalty. In our case, we apply the $\ell^2$ penalty when updating the parameters of the classification layer and the covariance matrix. Another approach to limiting parameter magnitude would be to enforce norm constraints on $\vec{w}$ and $\Sigma$, e.g., using a projected subgradient method at each update. We provide more details of this alternative in the supplementary material. Empirically, we found that the penalty-based approach outperformed the constraint-based approach, so we focus on the former by default.
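A minimal PyTorch sketch of the full penalized objective (an illustration under our assumptions; the function and variable names are ours, not those of the released implementation):
\begin{verbatim}
import torch
import torch.nn.functional as F

def wca_objective(logits, targets, W, L, lam=1e-4):
    # L_C: standard cross-entropy classification loss.
    loss_c = F.cross_entropy(logits, targets)
    L_tri = torch.tril(L)
    Sigma = L_tri @ L_tri.T            # Sigma = L L^T
    # L_WCA: sum_i ln(w_i' Sigma w_i); W holds one row per class.
    # Each term is positive whenever Sigma is positive definite.
    align = ((W @ Sigma) * W).sum(dim=1)
    loss_wca = torch.log(align).sum()
    # l2 penalty discourages inflating ||w|| instead of aligning.
    penalty = lam * (W ** 2).sum()
    return loss_c - loss_wca + penalty
\end{verbatim}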
\subsection{Injecting Anisotropic Noise} In contrast to previous work that only considers injecting isotropic Gaussian noise \cite{iclr19advbnn,iccv19pni,cvpr20learn2perturb,aaai2021sesnn}, we make use of anisotropic noise, providing a richer noise distribution than previous approaches. Crucially, this also means that the principal directions in which the noise is generated no longer have to be axis-aligned. That is, prior work suffers from the inability to simultaneously achieve alignment between noise and weights (required to minimise the adversarial gap bounded by Theorem~\ref{thm:bound}) and freedom to place weight vectors off-axis (required for good clean performance). Our use of anisotropic noise in combination with WCA encourages alignment of the weight vectors with the covariance matrix eigenvectors while allowing non-axis-aligned weights, thus providing more freedom in where to place the classification decision boundaries. Previous approaches are able to train the variance of each dimension of the isotropic noise via the ``reparameterization trick'' \cite{kingma2014vae}, where one samples noise from a distribution with zero mean and unit variance, then rescales the samples to get the desired variance. Because the rescaling process is differentiable, this allows one to learn the variance jointly with the other network parameters via backpropagation. In order to sample anisotropic noise, one can instead sample a vector of zero-mean, unit-variance noise and multiply this vector by a lower triangular matrix, $L$. This lower triangular matrix is related to the covariance matrix as \begin{equation*} \Sigma = L \cdot L^T. \end{equation*} This guarantees that the covariance matrix remains positive semi-definite after each gradient update. \section{Related Work} \label{sec:related-work} \subsection{Adversarial Attacks} We consider the standard threat model, where the attacker can construct norm-bounded perturbations to a clean input. First-order white-box adversaries use the gradient with respect to the input image to perturb it in the direction that increases the misclassification probability. An attack can also be targeted or untargeted, depending on whether a specific misclassification is required~\cite{iclr15fgsm, iclr2016bim, iclr18pgd, sp17cnw}. By default, we consider the untargeted variants of these attacks. The simplest first-order adversary is the Fast Gradient Sign Method (FGSM), proposed by~\citet{iclr15fgsm}. The attack adds a small perturbation to the input in the direction indicated by the sign of the gradient of the classification loss, $\mathcal{L}$, w.r.t.\ the input, $\vec x$, controlled by an attack strength $\epsilon$, \begin{equation*} \vec x^\prime = \vec x + \epsilon \cdot \operatorname{sign}(\nabla_{\vec x} \mathcal{L}(h(\vec x), y)), \end{equation*} where $h$ is the target model. \citet{iclr2016bim} upgraded this single-step attack to a multi-step version named the Basic Iterative Method (BIM), with iterative updates and a smaller step size at each update. Though BIM works effectively, \citet{iclr18pgd} demonstrated that randomly initializing the perturbation generated by BIM, and then making multiple attempts to construct an adversarial example, results in a stronger adversarial attack known as Projected Gradient Descent (PGD).
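A minimal PyTorch sketch of the FGSM update above and its iterated PGD variant (illustrative only; \texttt{model} and \texttt{loss\_fn} are placeholders for the target network and its classification loss):
\begin{verbatim}
import torch

def fgsm(model, loss_fn, x, y, eps):
    # Single-step attack: x' = x + eps * sign(grad_x L(h(x), y)).
    x_ = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_), y).backward()
    return (x + eps * x_.grad.sign()).clamp(0, 1)

def pgd(model, loss_fn, x, y, eps, alpha, k):
    # Iterated FGSM steps from a random start, projected back onto
    # the l-infinity ball of radius eps around x after each update.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(k):
        x_adv = x_adv.detach().requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
\end{verbatim}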
Another white-box attack of a slightly different nature is the C\&W attack~\cite{sp17cnw}, which aims to find an input perturbation $\vec \delta$ that maximizes the following objective: \begin{equation*} \begin{aligned} \mathcal{L}(h(\vec x + \vec \delta), y) - \|\vec \delta\|_{p}\\ \text{s.t.} \quad \vec x + \vec \delta \in [0, 1]^{n}, \end{aligned} \end{equation*} where $p$ is commonly chosen from $\{0,2,\infty\}$. In contrast to white-box attacks, black-box attacks assume that the details of the targeted model are unknown and that one can only access the model through queries. Therefore, in order to attack a target model in this case, one typically trains a substitute of it~\cite{acm2017practicalbbox} and generates an attack using the queried predictions of the target model and the local substitute. Also, instead of training a substitute for the target model, zero-order optimization methods~\cite{acmais2017zoo,ec19onepixel} have been proposed to estimate the gradients of the target model directly. In this paper, we demonstrate that our proposed method is robust against both white- and black-box attacks. \subsection{Stochastic Adversarial Defense} Recent work has shown that SNNs yield promising performance in adversarial robustness. This can be achieved by injecting either fixed~\cite{eccv18rse} or learnable~\cite{iccv19pni, cvpr20learn2perturb, aaai2021sesnn} noise into the models. The idea behind Random Self-Ensemble (RSE)~\cite{eccv18rse} is that one can simulate an ensemble of a virtually infinite number of models while training only one. This is achieved by injecting additive spherical Gaussian noise into various layers of a network and performing multiple forward passes at test time. Though simple, it effectively improves model robustness in comparison to a conventional deterministic model. RSE treats the variance of the injected noise as a hyperparameter that is heuristically tuned, rather than learned in conjunction with the other network parameters. In contrast, \citet{iccv19pni} propose Parametric Noise Injection (PNI), where a fixed spherical noise distribution is controlled by a learnable ``intensity'' parameter, further improving model robustness. The authors show that the noise can be incorporated into different locations of a neural network, i.e., it is applicable to both feature activations and model weights. The injected noise is trained together with the model parameters via adversarial training. Learn2Perturb (L2P)~\cite{cvpr20learn2perturb} is a recent extension of PNI. Instead of learning a single spherical noise parameter, L2P learns a set of parameters defining an isotropic perturbation-injection module. The parameters of the perturbation-injection module and the model are updated in an alternating fashion, named ``alternating back-propagation'' by the authors, using adversarial training. Finally, SE-SNN~\cite{aaai2021sesnn} introduces fully-trainable stochastic layers, which are trained for adversarial robustness by adding a regularization term to the objective function that maximizes the entropy of the learned noise distribution. Unlike the other SNNs, but similarly to ours, SE-SNN only requires clean training samples. Although conceptually related to the aforementioned stochastic defense methods, WCA-Net{} differs in several important aspects: WCA-Net{} is the first stochastic model to inject learnable \emph{anisotropic} noise into the latent features. Our approach is derived from the optimization of a learning theoretic bound on the adversarial generalisation performance of SNNs, which motivates the use of anisotropic noise. WCA-Net{} does not require adversarial training and can be optimized with clean samples alone, and is therefore simpler and more efficient to train. Another class of stochastic defenses applies noise to the input images, rather than injecting noise into intermediate activations~\citep{pinot2019, cohen2019, li2019, lee2019}. From a theoretical point of view, this can be seen as ``smoothing'' the function implemented by the neural network in order to reduce the amount the output of the network can change when the input is changed only slightly. This type of defense can be considered a black-box defense, in the sense that it does not actually involve regularizing the weights of the network --- it only modifies the input.
While interesting, it has primarily been applied in scenarios where one is using a model-as-a-service framework and cannot be sure whether or not the model was trained with some form of adversarial defense~\citep{cohen2019}.
\section{Preface} At this stage in the development of the theory of core-collapse supernovae, two possible explosion mechanisms are most often discussed: neutrino-driven and magneto-rotationally driven. The state of the theory is not sufficiently well developed to determine whether there is a clear break between these two cases or whether they represent limiting cases of a continuum. Nonetheless, in any scenario, the physics discussed here is relevant. It either dominates, leading to a neutrino-driven explosion, or sets the stage for a magneto-rotationally driven supernova. That is, core-collapse supernova theorists have no choice but to first master and, more important, implement realistic models of neutrino transport in core-collapse supernova environments. What is meant by ``realistic'' will hopefully become clear as we progress through this review. What will also hopefully become clear is that challenges to achieving realism will be faced on multiple fronts: physical, numerical, and computational. When charged to write this review, we were asked not to provide an encyclopedic review of past work in the field but, rather, to present the current issues and challenges faced by the core-collapse supernova modeling community, particularly as they pertain to what is arguably the most difficult aspect to model: neutrino transport. Thus, with this charge in mind, we have written our review with an emphasis on the future, on what modelers must and will face to develop realistic models of these most important events. \section{Setting the stage} \label{sec:SettingTheStage} The idea that core-collapse supernovae could be neutrino driven was first proposed more than fifty years ago by \citet{CoWh66} in their seminal numerical study. This work set neutrinos front and center in core-collapse supernova theory, which has remained the case ever since. The Colgate and White studies were followed by the early studies of \citet{Wilson1971}, which cast doubt on the efficacy of their proposal. But the development of the electroweak theory, which predicted the existence of weak neutral currents, would change all that. Given weak neutral currents, Freedman recognized that it would be possible for neutrinos to scatter off the nucleons in a nucleus \emph{collectively}. The cross sections for such scattering would scale with the square of the nuclear neutron number, $N$, and would consequently be large. Shortly thereafter, \citet{Wilson1974}, using the new weak interaction cross sections for this process, demonstrated that the Colgate and White proposal was in fact viable. The recognition of this intertwined relationship between core-collapse supernova physics and neutrino weak interaction physics drives continued research to this day. This early and foundational work set in motion nearly forty years of further study under the assumption of spherical symmetry, traversing a range of descriptions of neutrino transport in stellar cores and a range of sophistication in the treatment of the microphysics input to the models: the neutrino weak interaction physics and the equations of state describing a stellar core's nuclear, leptonic, and photonic degrees of freedom. Neutrino mediation of core-collapse supernova dynamics in its modern instantiation is through charged-current absorption of electron neutrinos and antineutrinos on neutrons and protons, respectively.
The nucleons become available as the stalled supernova shock wave dissociates the nuclei in the infalling stellar core material passing through it. The neutrino absorption heats the material, depositing energy behind the shock. The shock initially loses energy to dissociation and neutrino losses. When sufficient energy is deposited by neutrino heating, the shock again becomes dynamical, propagates outward in radius, and reverses the infall of material passing through it, to disrupt the star in a core-collapse supernova \citep{Wilson1985,BeWi85}. This modern instantiation of neutrinos' role in the supernova mechanism relies on the developments surrounding the large neutrino--nucleus scattering cross sections discussed earlier. \citet{Arnett1977} was the first to show that such cross sections led to the trapping of the electron neutrinos produced during stellar core collapse through electron capture on nuclei and protons. He demonstrated that, despite the neutrinos' weakly interacting nature, the densities in the stellar core rise sufficiently rapidly to render the electron neutrino mean free paths smaller than the size of the stellar core. Neutrino trapping gives rise to a trapped, degenerate sea of electron neutrinos in the inner stellar core, which emerges on diffusive time scales after stellar core bounce and the launch of the supernova shock wave from the proto-neutron star. \begin{figure}[htbp] \includegraphics[width=\textwidth]{PNSGainShockProfile.pdf} \caption{Schematic showing the characteristic structure after stellar core bounce and the stall of the supernova shock wave seen in all core-collapse supernova models. All three flavors of neutrinos, together with their antineutrino partners, emanate from the proto-neutron star. Here, a single surface characterizes the proto-neutron star surface and the ``neutrinosphere,'' the surface of last scattering for the neutrinos. In reality, there are multiple surfaces, although they are very close together. The neutrino interaction cross sections are flavor and energy dependent. Consequently, there is a neutrinosphere for each neutrino flavor and energy ``group'' in core-collapse supernova models. Between the proto-neutron star surface and the stalled shock wave is the so-called gain radius, separating the region of net neutrino cooling (below the gain radius) from net neutrino heating (above the gain radius). Neutrino heating is mediated by charged-current absorption of electron neutrinos (antineutrinos) on neutrons (protons) below the shock, liberated by shock dissociation of nuclei as they pass through it. Cooling is mediated by the inverse weak interactions. Neutrino heating in the ``gain region'' between the gain radius and the shock is central to the neutrino-driven core-collapse supernova mechanism. Given this neutrino heating, the gain region becomes convectively unstable. Neutrino-driven turbulent convection in this region assists neutrino heating to generate a supernova. The goal is to reverse the infall of the material ahead of the shock and for the shock itself to propagate outward.
The neutrino heating in the gain region is sensitive to the neutrino luminosities, spectra, and angular distributions there, all of which depend on the transport of neutrinos through the semitransparent neutrinospheric region, where the neutrinos are neither diffusive nor free streaming.} \label{fig:PNSGainShockProfile} \end{figure} The proto-neutron star comprises the inner cold unshocked core and a hot shocked mantle of material above it that is not ejected by the shock. Electron degeneracy is lifted in the hot mantle, leading to a significant population of electron--positron pairs, which in turn leads to the production of neutrinos and antineutrinos of all three flavors via electron--positron annihilation. The densities in the mantle are sufficiently high that neutrinospheres for all three flavors of neutrinos and antineutrinos exist, all lying within kilometers of each other, as a function of flavor and energy, in the density cliff that defines the proto-neutron star surface. The post-bounce stratification of the core, setting the stage for neutrino shock revival, is shown in Fig.~\ref{fig:PNSGainShockProfile}. Neutrinos of all three flavors emerge from their respective neutrinospheres at the proto-neutron star surface. Between the proto-neutron star surface and the shock, neutrino heating and cooling take place through charged-current electron neutrino and antineutrino absorption on and emission by nucleons, respectively. The different radial dependencies of neutrino heating and cooling lead to net heating above the ``gain radius'' and net cooling below it. The region between the gain radius and the shock, where net neutrino heating takes place, is known as the gain region. The energy deposition rate per gram of material in the gain region can be expressed in terms of the electron neutrino and antineutrino luminosities, squared rms energies, and inverse flux factors as \begin{equation} \dot{\epsilon}=\frac{X_n}{\lambda_{0}^{a}}\frac{L_{\nu_e}}{4\pi r^2} \left\langle E^{2}_{\nu_e} \right\rangle \left\langle \frac{1}{\mathcal{F}_{\nu_e}} \right\rangle +\frac{X_p}{\bar{\lambda}_{0}^{a}}\frac{L_{\bar{\nu}_e}}{4\pi r^2} \left\langle E^{2}_{\bar{\nu}_e} \right\rangle \left\langle \frac{1}{\mathcal{F}_{\bar{\nu}_e}} \right\rangle, \label{eq:heatingrate} \end{equation} where $\epsilon$ is the internal energy of the stellar core fluid per gram, $X_{n,p}$ are the neutron and proton mass fractions, respectively, $L_{\nu_e,\bar{\nu}_e}$ are the electron neutrino and antineutrino luminosities, respectively, $\mathcal{F}_{\nu_e,\bar{\nu}_e}$ are the flux factors for the electron neutrinos and antineutrinos, respectively, and $\lambda_{0}^{a}, \bar{\lambda}_{0}^{a}$ are constants related to the weak interaction coupling constants. Thus, knowledge of the neutrino luminosities, spectra, and angular distributions is needed to compute the neutrino heating rates. (We return to the angular-distribution dependence, via the flux factor, in a brief aside below.) This requires knowledge of the neutrino distribution functions, $f_{\nu_e,\bar{\nu}_e}(r,\theta,\phi,E,\theta_{p},\phi_{p},t)$, from which these quantities can be calculated. The neutrino distribution functions are determined by solving their respective Boltzmann kinetic equations, which will be discussed later. Thus, the core-collapse supernova problem is a phase space problem, in the end involving 6 dimensions plus time. The common parlance, which divides core-collapse supernova models into ``1D'' (spherical symmetry), ``2D'' (axisymmetry), and ``3D'' models, is quite misleading.
In reality, the dimensionality is 3D for spherical symmetry, involving 1 spatial dimension (radius) and 2 momentum-space dimensions (neutrino energy and a single direction cosine), 5D for axisymmetry, involving 2 spatial dimensions (radius and $\theta$) and 3 momentum-space dimensions (neutrino energy and 2 direction cosines), and 6D in the absence of any symmetry assumption, involving 3 spatial dimensions (radius, $\theta$, and $\phi$) and 3 momentum-space dimensions (neutrino energy and 2 direction cosines). The central densities of the proto-neutron star reach values between $10^{14}$ and $10^{15}\mathrm{\ g\ cm}^{-3}$. Its mass, which is $O(1)\,M_\odot$, is initially contained within a radius $O(100)$ km. Such conditions are not Newtonian. Detailed comparisons made in the context of spherically symmetric models of core-collapse supernovae \citep{BrDeMe01} between Newtonian and general relativistic models revealed the dramatic differences in the overall ``compactification'' of the postbounce core configuration defined by the neutrinosphere, gain, and shock radii, as well as the dramatic difference between the magnitudes of the infall velocities through the gain region. Moreover, neutrino luminosities and rms energies were increased in the general relativistic case due to the higher core temperatures. These studies made obvious the fact that the core-collapse supernova environment is a general relativistic environment. Models that assume spherical symmetry reached the needed level of sophistication only fairly recently, with fully general relativistic models that included Boltzmann neutrino transport, an extensive set of neutrino weak interactions, and, at the time, an industry-standard equation of state \citep{LiMeTh01,LeMeMe12a}. The outcomes of these models were quite discouraging. In all cases, the shock radius reaches a maximum and then recedes with time until the simulations are terminated. Explosion does not occur, and the end outcome of each simulation would be the formation of a stellar-mass black hole. With the exception of the lowest-mass massive stars \citep{KiJaHi06}, it became clear that the Colgate and White proposal was doomed to fail without the aid of additional physics. Specifically, the assumption of spherical symmetry had to be eliminated. In retrospect, it is now obvious why: Neutrino emission by the proto-neutron star, which drives the explosion above it, is fueled by the accretion of stellar core material onto it. Explosion in spherical symmetry would cut off such accretion entirely once initiated, cutting off the fuel that drives the neutrino emission that drives the explosion. Unless accretion and explosion can occur simultaneously, we are presented with a Goldilocks problem: Enough energy has to be deposited behind the shock before explosion occurs, so for a sufficiently energetic explosion, the explosion cannot occur too soon. And given that the accretion rate decreases with time, a consequence of the stellar core density profile, an explosion also cannot occur too late. The first two-dimensional core-collapse supernova simulations by \citet{HeBeCo92,HeBeHi94} demonstrated that accretion and explosion naturally coexist in the postshock flow. Heating from below by the proto-neutron star generates convection in the gain region. Such ``neutrino-driven'' convection allows continued accretion while some of the material is heated, expands, and moves outward. Lower-entropy, accreting fingers are evident in Fig.~\ref{fig:NDConvection}, as well as higher-entropy rising plumes.
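Returning briefly to Eq.~\eqref{eq:heatingrate}: in one common convention (quoted here for orientation only; individual transport codes differ in detail), the flux factor at a given radius and energy is the ratio of the first to the zeroth angular moment of the distribution function,
\begin{equation*}
\mathcal{F}_{\nu} = \frac{\int_{0}^{2\pi}d\phi_{p}\int_{-1}^{+1}d\mu\,\mu\,f}{\int_{0}^{2\pi}d\phi_{p}\int_{-1}^{+1}d\mu\,f} = \langle\mu\rangle,
\end{equation*}
which tends to unity in the free-streaming limit and is small where the radiation field is nearly isotropic. The mean inverse flux factor appearing in Eq.~\eqref{eq:heatingrate} is therefore enhanced in the semitransparent region between the neutrinospheres and the shock, where the angular distributions are not yet forward peaked, and an accurate computation of the heating hinges on capturing exactly this angular information.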
The Herant et~al.\ studies opened the next, much-needed chapter in core-collapse supernova theory. As with spherically symmetric modeling, axisymmetric modeling continues to this day. (See \citealt{Mueller2020} for a focused and comprehensive review of convection and other fluid instabilities in core-collapse supernova environments that are integral to the supernova explosion mechanism.) \begin{figure}[htb] \includegraphics[width=\textwidth]{NDConvection} \caption{Snapshot of neutrino-driven convection at 25 ms after bounce in the two-dimensional core-collapse supernova model of \citet{HeBeHi94} initiated from a $25\,M_\odot$ progenitor.} \label{fig:NDConvection} \end{figure} The core-collapse supernova modeling community has not yet produced general relativistic axisymmetric models with Boltzmann neutrino transport and with industry-standard weak interaction physics and equations of state, but significant progress has been made. The first simulations to evolve both the neutrino spectra and their angular distributions were performed by \citet{OtBuDe08}. Included were the spatial advection terms on the left-hand side of the Boltzmann equation (corresponding to neutrino transport in each of the spatial dimensions) and the collision term on the right-hand side of the equation (corresponding to neutrino sources and sinks due to emission, absorption, and scattering), with a subset of the weak interactions considered complete today. The simulations were purely Newtonian. Neglected were all relativistic effects in the Boltzmann kinetic equations, describing special relativistic Doppler shift of neutrino energies, general relativistic blue and red shift of neutrino energies, angular aberration of neutrino propagation, etc. Outcomes from their multi-angle, multi-frequency approach were compared with outcomes from a similar simulation performed with multigroup flux-limited diffusion. Notable differences were obtained between the two transport approaches in the neutrino radiation field quantities entering the expression for neutrino heating, Eq.~\eqref{eq:heatingrate}---specifically, the inverse flux factors and rms energies---which translated into notable differences in neutrino heating: up to a factor of 3 for rapidly rotating cores. More recent studies assuming axisymmetry by \citet{NaIwFu18} implemented special relativistic Boltzmann neutrino transport with a subset of the neutrino weak interactions regarded as essential in today's leading multi-physics models, coupled to Newtonian hydrodynamics and gravity. In light of their Boltzmann implementation, these authors were able to make assessments regarding the fundamental assumption at the heart of the most commonly used closure prescription---the so-called M1 closure---currently in use in most multi-dimensional supernova studies deploying multidimensional neutrino transport in a moments approach, which we will discuss shortly. Nagakura et~al.\ find that the neutrino radiation field is in fact not axisymmetric about the outward radial direction, which is reflected in non-negligible off-diagonal components of the Eddington tensor---specifically, $k^{r\theta}$. The authors emphasize how such components play a non-negligible role in the evolution of the neutrino fluxes, increasing the neutrino luminosities by $\sim$10\%. The neutrino heating rate, Eq.~\eqref{eq:heatingrate}, is then increased commensurately.
Experience has shown that corrections at this level in any or all of the quantities entering the neutrino heating rate are noteworthy and warrant continued exploration, perhaps for all models, but especially in light of marginal cases of explosion for some, perhaps many, progenitors. \begin{figure}[htbp] \includegraphics[width=\textwidth]{RbR.pdf} \caption{Schematic showing the key characteristics of the ray-by-ray neutrino transport approximation. Along each radial ray (e.g., along segments DB or HF), a complete solution to the spherically symmetric neutrino transport equations is obtained assuming spherical conditions given by the conditions along each ray. This approximation afforded the ability to implement sophisticated transport solvers that had been developed in the context of models of core-collapse supernovae assuming spherical symmetry, at the expense of ignoring \emph{net} lateral transport that would occur in multiple spatial dimensions. In spherical symmetry, neutrinos can propagate along the segment AB, which is clearly not a purely radial segment. Therefore, there \emph{is} lateral transport. However, in spherical symmetry, every neutrino propagating along AB is matched by a neutrino propagating along CB, and the net flux at point B is purely radial. The lateral fluxes cancel exactly. Focusing on neutrino heating at point B, the ray-by-ray approach assumes that the thermodynamic conditions across the proto-neutron star surface (i.e., the neutrinosphere) between points A and C are uniform and given by the thermodynamic conditions at point D. Given a temporary hot spot at point D on the surface, the neutrino heating at point B would be overestimated. Moreover, were point H significantly cooler, relatively speaking, at the same instant, heating at point F would be underestimated because the hot spot at point D would be ignored even though it is within the cone of neutrino trajectories contributing to the neutrino heating at F. Thus, the ray-by-ray approximation may lead to larger angular variations in the neutrino radiation field than would be present were three-dimensional transport used---particularly if the hot spots on the proto-neutron star surface persist.} \label{fig:RbR} \end{figure} Not unexpectedly, given the physical complexity and the computational cost, no simulations have been performed to date that deploy three-dimensional general relativistic Boltzmann neutrino transport in general relativistic core-collapse supernova models---i.e., including general relativistic hydrodynamics and gravity. This is a long-term goal and, as made clear by what we have learned in the context of studies in spherical symmetry and axisymmetry, a needed goal. Nonetheless, three-dimensional core-collapse supernova modeling of increasing sophistication is ongoing. The first three-dimensional core-collapse supernova models were performed by \citet{FrWa04} using gray (neutrino angle- and energy-integrated) radiation hydrodynamics. The first spectral (neutrino-angle-integrated) three-dimensional models were performed by \citet{HaMuWo13}. The current stable of spectral three-dimensional models falls into two categories. Both implement spectral (but not multi-angle) neutrino transport in a one- or two-moment approach. In one category, the so-called ``ray-by-ray'' approximation is used. In the other, the neutrino transport is three dimensional. (A clarifying remark: The simulations by Hanke et~al.\ used a Boltzmann solver in the context of their ray-by-ray approach.
As such, some angular dependence was kept. However, three-dimensional models require two angles to describe a neutrino's propagation direction, and in the ray-by-ray approach the angular dependence in one of the angles is approximate in the sense that it is computed assuming spherical symmetry.) The earliest three-dimensional models---e.g., those of Hanke et~al.\---implemented ray-by-ray transport. In the ray-by-ray approach, the three-dimensional neutrino transport problem is broken up into $N=N_{\theta}\times N_{\phi}$ spherically symmetric problems, where $N_{\theta,\phi}$ are the number of $\theta$ and $\phi$ zones used in the simulation. The ray-by-ray approximation follows lateral neutrino transport under the assumption of spherical symmetry, meaning there is lateral transport of individual neutrinos, but the net lateral flux is zero. (For example, neutrinos can propagate along the segment between A and B in Fig.~\ref{fig:RbR}, but an equal number of neutrinos must propagate along the path between C and B, such that the net flux at point B is purely radial.) Moreover, as illustrated by Fig.~\ref{fig:RbR}, neutrino heating at a point in the gain region may be over- or under-estimated. Consider the point B in the heating region. The backward cone emanating from point B subtends a portion of the neutrinosphere, between points A and C, that is the source of the neutrinos that heat the material at point B. The ray-by-ray approximation, which assumes spherical symmetry for each ray, assumes that the thermodynamic conditions across the neutrinosphere between points A and C are the same as those at point D. If point D is a hot spot, the ray-by-ray approximation will compute the heating at point B assuming the neutrinosphere between points A and C is hot. For neutrino heating at point F, and assuming that point H is not a hot spot, the ray-by-ray approximation will assume that the conditions at point H are mimicked across the portion of the neutrinosphere between points E and G, regardless of the fact that point D is hot and within that portion of the surface. Thus, the neutrino heating at point B will be overestimated, whereas the neutrino heating at point F will be underestimated. Whether or not the ray-by-ray approximation leads to significant over- or under-estimations of the neutrino heating over the course of the shock reheating epoch will of course depend on whether or not such variations in the thermodynamic conditions across the neutrinosphere persist; assessing this requires a comparison that takes into consideration the time dependence of those conditions. Comparisons between ray-by-ray and non-ray-by-ray approaches in the context of axisymmetric core-collapse supernova models found notable differences in, among other outcomes, the time to explosion \citep{SkBuDo16}. However, more recent comparisons in the context of three-dimensional models found no significant differences between the two approaches \citep{GlJuJa19}. Of course, without three-dimensional transport implementations, it would be difficult to assess the efficacy of using the ray-by-ray approach, or other approximations. In the end, such approximations must be removed, if only just to check them. The ray-by-ray approach of the Oak Ridge group is based on one-moment closure through flux-limited diffusion \citep{BrBlHi20}. They follow the evolution of the lowest angular moment of the neutrino distribution: the number density. The Max Planck group's ray-by-ray implementation is based on two-moment closure \citep{RaJa00}.
They solve an approximate Boltzmann equation for the purposes of computing the variable Eddington factor needed to close the system of equations describing the evolution of the first two moments of the neutrino distribution (in spherical symmetry, there is only one first moment, corresponding to the radial number flux, together with the zeroth moment, the neutrino number density). For both two- and three-dimensional core-collapse supernova models that attempt to include general relativity at some level of approximation, if not exactly, whether the hydrodynamics is Newtonian or general relativistic, the two- or three-dimensional neutrino transport is based on the solution of the neutrino moments equations describing the evolution of the lowest angular moments of the neutrino distribution function. For example, in terms of the neutrino distribution function, the number moments (spectral number density, spectral number flux) are defined as \begin{equation} \mathcal{N}(r,\theta,\phi,E,t)\equiv\int_{0}^{2\pi}d\phi_p\int_{-1}^{+1}d\mu f(r,\theta,\phi,\mu,\phi_p,E,t), \label{eq:zerothmoment} \end{equation} \begin{equation} \mathcal{F}^{i}(r,\theta,\phi,E,t)\equiv\int_{0}^{2\pi}d\phi_p\int_{-1}^{+1}d\mu n^{i}f(r,\theta,\phi,\mu,\phi_p,E,t), \label{eq:firstmoment} \end{equation} where $\mu\equiv\cos\theta_p$ is the neutrino direction cosine associated with $\theta_p$, one of the two angles of propagation, measured relative to the outward-pointing radial vector at the neutrino's position at time $t$. In three dimensions, two angles are needed to uniquely define a neutrino propagation direction. The angle $\phi_p$ provides the second. $n^i$ is the neutrino direction cosine in the $i^{\rm th}$ direction, whose components are given as functions of $\mu$ and $\phi_p$. $E$ is the neutrino energy. $E,\theta_p,\phi_p$ can be viewed as spherical momentum-space coordinates. Above, $\mathcal{N}$ and $\mathcal{F}^i$ are the number density and number fluxes, respectively. In three dimensions, there is of course a number flux for each of the three spatial dimensions, delineated by the superscript $i$. Integration of the neutrino Boltzmann equation over the angles $\theta_p$ and $\phi_p$, weighted by $1$, $n^i$, $n^{i}n^j$, \ldots, defines an infinite set of evolution equations for the infinite number of angular moments of the distribution function, which is obviously impossible to solve. In a moments approach to neutrino transport, the infinite set of equations is rendered finite by truncation, after the equation for the zeroth moment in the case of one-moment closure (e.g., flux-limited diffusion) or after the equations for the first moments in the case of two-moment closure (e.g., M1 closure). In the latter case, closure can be ``prescribed'' (e.g., M1 closure) or computed (e.g., through a variable Eddington tensor approach). We will discuss these approaches in greater detail later in our review. It is important to understand the essence of the approximations being made in moments approaches to neutrino transport in core-collapse supernova models. One does not integrate out all of the angular information contained within the neutrino distribution function. Some angular information remains. The higher the closure is made in the order of moment equations, the more angular information is kept. For example, two-moment closure keeps the fundamental angular dependencies.
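As a concrete, if schematic, illustration, the moments defined in Eqs.~\eqref{eq:zerothmoment} and \eqref{eq:firstmoment} can be approximated from a distribution function tabulated in angle by numerical quadrature. The sketch below is illustrative only: the variable names are ours, and a simple trapezoidal rule on a $(\mu,\phi_{p})$ grid stands in for the carefully chosen quadratures used in production codes.
\begin{verbatim}
import numpy as np

def number_moments(f, mu, phi_p):
    """Approximate the zeroth and first (radial) angular moments of
    f(mu, phi_p) at fixed (r, theta, phi, E, t) by trapezoidal quadrature.

    f     : 2D array, f[i, j] = f(mu[i], phi_p[j])
    mu    : direction cosines, a grid on [-1, 1]
    phi_p : momentum-space azimuthal angles, a grid on [0, 2*pi]
    """
    # Eq. (zerothmoment): integrate f over phi_p, then over mu.
    N = np.trapz(np.trapz(f, phi_p, axis=1), mu)
    # Radial component of Eq. (firstmoment): the radial direction
    # cosine is n^r = mu.
    F_r = np.trapz(np.trapz(f * mu[:, None], phi_p, axis=1), mu)
    return N, F_r
\end{verbatim}
In a moments approach, of course, one evolves such moments directly rather than reconstructing them from $f$; the point of the sketch is only to make the definitions concrete.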
The ratio of the number flux in any of the three dimensions to the number density, at any spacetime point, is a measure of how forward peaked the neutrino angular distribution is in that dimension at that point. Thus, a moments approach retains much of the information of the neutrino radiation field contained within the neutrino distribution functions, while providing a sophisticated modeling path forward that is achievable on present leadership-class computing systems. Direct Boltzmann solutions for the neutrino radiation field will have to wait until sustained exascale computing platforms become available over the next decade. Three-dimensional models that include an approximation to general relativistic gravity in the form of an ``effective potential,'' Newtonian hydrodynamics, ray-by-ray one- or two-moment neutrino transport with some corrections for special relativity ($O(v/c)$) or general relativity (gravitational redshift of neutrino energies), and a state-of-the-art set of neutrino weak interactions have been performed by the Max Planck and Oak Ridge groups \citep{HaMuWo13,LeBrHi15,MeJaMa15,MeJaBo15,SuJaMe18}. Three-dimensional models that include general relativistic hydrodynamics and gravity, and three-dimensional, general relativistic, $O(1)$ or fully relativistic (special and general) two-moment neutrino transport with an extensive set of neutrino weak interactions have been performed by \citet{RoOtHa16} and \citet{KuTaKo16}, respectively. Three-dimensional models that couple Newtonian hydrodynamics and approximate general relativistic gravity, as above, to three-dimensional two-moment neutrino transport with corrections for special and general relativity, as above, and an extensive set of neutrino weak interactions were performed by \citet{OcCo18,VaBuRa19,BuRaVa19}. It is clear that the state of the art of core-collapse supernova modeling in three dimensions is evolving, with some models classifiable as more complete macrophysically---i.e., models that implement three-dimensional, general relativistic gravity, hydrodynamics, and neutrino transport---and some models classifiable as more complete microphysically---i.e., models that include state-of-the-art microphysics. Note that the computational cost associated with the solution of the neutrino moment or Boltzmann transport equations is dominated by the computations associated with the ``collision term''---i.e., with the neutrino interactions with the stellar core fluid. It is also clear---and of course at this point should come as no surprise---that the above history of the development of core-collapse supernova theory over the last fifty-plus years centers on the development of neutrino transport theory and its implementation in this context. Neutrino mass, albeit small in relation to the neutrino energies attained in core-collapse supernovae, leads to neutrino flavor transformations. There is growing, though still inconclusive, evidence that such transformations may play a role in neutrino shock reheating (e.g., see \citealt{TaHuRa17,AbDuSu19,AzYaMo19}). The existence of so-called ``fast'' flavor transformations, which can exist even in the baryon-laden environment below the supernova shock wave, was first brought to the attention of the supernova modeling community by \citet{Sawyer05}. Prior to this work, it was assumed that quantum mechanical coherence among the neutrinos in the region beneath the shock would be destroyed by neutrino--matter collisions, thereby rendering such effects unimportant to neutrino shock reheating.
However, fast modes operate on scales much shorter than a neutrino mean free path and, in fact, are not wiped out by collisions; they beg to be considered. As in the classical case, the story boils down to capturing the neutrino angular distributions for all three flavors of neutrinos, as a function of space and time during the evolution of the supernova. The neutrinospheres for the three neutrino flavors are distinguished first and foremost by their interactions with the stellar fluid, with electron neutrinos and antineutrinos interacting through both charged and neutral currents and the muon and tau neutrinos interacting only through neutral currents. Moreover, the preponderance of neutrons over protons reduces the opacity of the stellar fluid to electron antineutrinos, and a hierarchy sets in, with the muon and tau neutrinospheres at the highest densities, followed by the neutrinosphere associated with the electron antineutrinos, followed in turn by the neutrinosphere associated with the electron neutrinos, at the lowest densities, relatively speaking. Given the layering of the neutrinospheres, at a given time during neutrino shock revival, the neutrino angular distributions at a given spatial location in the cavity between the neutrinospheres and the shock will differ by flavor. It is the differences between the angular distributions of each flavor that set the stage for fast flavor transformation. Thus, the need, in the classical case, for a Boltzmann description of the neutrino radiation field is multifold: (1) Moments approaches are approximations, whose efficacy cannot be known {\em a priori} and must be checked against the exact (classical) result. Examples of this will be discussed here. (2) The development of closure prescriptions for moment models is rife with difficulty, partially because of nonlinearities introduced by the closure procedure. For example, a numerical method for two-moment, multifrequency, general relativistic neutrino transport that respects Fermi--Dirac statistics does not yet exist and will be difficult to develop. Furthermore, the development of nonlinear moment models beyond the two-moment approximation, to capture more kinetic effects, will be even more challenging. (3) Boltzmann and low-order moments approaches can be used together to accelerate convergence of the solution to the Boltzmann equation, potentially becoming competitive, in terms of speed and memory use, with nonlinear, high-order moments approaches. (4) The exploration of the impact of fast flavor transformations on the core-collapse supernova mechanism will require precise knowledge of the neutrino angular distributions for all three flavors across the spacetime of a supernova model. Such information can be obtained only through a solution of the classical Boltzmann kinetic equations for each neutrino flavor in association with simulation of the coherent quantum effects---i.e., through a solution of the multi-angle, multi-frequency neutrino {\em quantum} kinetics equations for all neutrino flavors. While the justification for deploying Boltzmann kinetics in the classical case can be made, it is through a combination of Boltzmann and moments approaches that progress will be made in both the near and the long term. We are attempting to address myriad science questions, and past experience already tells us that the answers to these questions will vary with the characteristics of the massive progenitors in which core-collapse supernovae occur. How do massive stars explode? Which explode and which do not?
Among those that explode, what elements do they produce? How do they contribute to galactic chemical evolution? And the list goes on. At present, there is no foreseeable time at which all of these questions will be addressable with Boltzmann methods, let alone quantum kinetics. An uncountable number of models will ultimately be required to understand the death of the diverse population of stars we are presented with in nature, as well as the death of any one of them. Our understanding of stellar death will not come from a single ``hero'' simulation, but from many simulations. Thus, it is in the application of both Boltzmann (classically) and moments approaches and, through this, the development of ever more realistic moments approaches that we will be able to advance our knowledge of one of the most important phenomena in the Universe. This is already clear from the modeling history to date. We have come a long way since Colgate and White's seminal work through precisely the hybrid approach discussed here. Hence, this review will focus on both approaches, as well as point to potentially efficacious hybrid approaches that could be developed and deployed in the future. \section{Design specifications} There have been many lessons learned during the 54 years that have passed since the first numerical simulations of core-collapse supernovae were performed by Colgate and White. These lessons can now be used to construct a list of design specifications for models of neutrino transport that will be used in future core-collapse supernova models: \begin{enumerate} \item Ultimately, definitive simulations of core-collapse supernovae in the classical limit will require a Boltzmann kinetic description of neutrino transport for all three flavors of neutrinos and their antineutrino partners. \item In the event that sufficient evidence points to the need to consider in greater detail the impact of neutrino quantum kinetics on the supernova explosion mechanism, a quantum kinetics description of neutrino transport would be required. A classical Boltzmann description would be the natural, and required, starting point for the development of such a quantum kinetics treatment. \item The simulations must be general relativistic. They must include special and general relativistic effects such as Doppler and red/blue shifts of neutrino energy, respectively, and angular aberration in both cases, due to fluid motion and spacetime curvature. \item These simulations must include all of the neutrino weak interactions that have to date been demonstrated to be important, and the description of the interactions must be state of the art. \item The quality of core-collapse supernova simulations will ultimately be gauged by, among other things, the degree to which lepton number and energy are conserved. More specifically, the discretizations of the integro-partial differential Boltzmann equations must conserve lepton number and energy \emph{simultaneously}. \item The discretizations of the Boltzmann equations---in particular, the collision terms---must accommodate both small- and large-energy scattering. \item The numerical methods must also accommodate realistic equations of state for the nuclear, leptonic, and photonic components. In cases where the neutrino opacities depend on the nuclear force model, the neutrino opacities and the equation of state must be consistent.
\item In the interim, when moments approaches to neutrino transport must be used until Boltzmann approaches become feasible, all of the above design specifications still hold. \item For moments models, the closures used must respect the Fermi--Dirac statistics of neutrinos, reflecting the fact that the neutrino distribution functions are bounded. \end{enumerate} \section{The equations of neutrino radiation hydrodynamics} In core-collapse supernova models, the stellar fluid is modeled as a perfect fluid, augmented by an equation for the electron density in order to accommodate a nuclear equation of state. (For brevity of presentation, we will not include effects due to electromagnetic fields.) The relevant equations are then \begin{align} \nabla_{\nu}J_{\mbox{\tiny B}}^{\nu} &= 0, \label{eq:BaryonMassConservation} \\ \nabla_{\nu}T_{\mbox{\tiny fluid}}^{\mu\nu} &= - G^{\mu}(f_{\nu_{e}},f_{\bar{\nu}_{e}},\ldots), \label{eq:fluidFourMomentumConservation} \\ \nabla_{\nu}J_{e}^{\nu} &= - m_{\mbox{\tiny B}}\,L(f_{\nu_{e}},f_{\bar{\nu}_{e}},\ldots), \label{eq:ElectronNumberConservation} \end{align} where the baryon rest-mass density current is \begin{equation} J_{\mbox{\tiny B}}^{\nu} = \rho\,u^{\nu}, \end{equation} where $\rho=m_{\mbox{\tiny B}}\,n_{\mbox{\tiny B}}$ is the baryon rest-mass density, $m_{\mbox{\tiny B}}$ the average baryon (rest) mass, $n_{\mbox{\tiny B}}$ the baryon number density, and $u^{\nu}$ is the fluid four-velocity. The fluid energy-momentum tensor is \begin{equation} T_{\mbox{\tiny fluid}}^{\mu\nu} = \rho\,h\,u^{\mu}\,u^{\nu} + p\,g^{\mu\nu}, \end{equation} where $h=1+(e+p)/\rho$ is the specific enthalpy, $e$ the internal energy density, and $p$ the pressure. The electron density current is given by \begin{equation} J_{e}^{\nu} = \rho\,Y_{e}\,u^{\nu}, \end{equation} where $Y_{e}$ is the electron fraction. The electron density (technically, the electron minus positron density) is $n_{e}=\rho\,Y_{e}/m_{\mbox{\tiny B}}$. To close the system given by Eqs.~\eqref{eq:BaryonMassConservation}--\eqref{eq:ElectronNumberConservation}, the pressure $p$ is given by an equation of state (EOS); e.g., $p=p(\rho,e,Y_{e})$. The source terms on the right-hand sides of Eqs.~\eqref{eq:fluidFourMomentumConservation} and \eqref{eq:ElectronNumberConservation}, $-G^{\mu}$ and $-L$, describe four-momentum and lepton exchange between the fluid and neutrinos. These terms depend on the neutrino distribution functions (or moments of the neutrino distribution functions), as already noted in Sect.~\ref{sec:SettingTheStage}, as well as on thermodynamic properties of the stellar fluid. This nonlinear coupling is key to the supernova mechanism and its associated observables, and it is the topic of the present review. \subsection{The need for a kinetic description of neutrinos} \label{sec:needForKineticDescription} Figure~\ref{fig:tmfp} shows the magnitude of the neutrino transport mean free paths for the electron neutrino, electron antineutrino, and heavy-flavor neutrinos (muon and tau neutrinos and their antineutrinos). The mean free paths are given at a time of 100 ms after bounce, during the critical shock reheating epoch, in the context of a \textsc{Chimera} supernova simulation of a $12\,M_\odot$ star. They are given as a function of radius, for select neutrino energies. Also shown are the neutrinospheres for the select energies, as well as the radius of the stalled shock wave.
For all neutrino flavors and energies, the mean free paths exceed the respective neutrinosphere radii, as well as the shock radius, at some radius as we move outward. That is, the neutrino mean free paths exceed the scale of the proto-neutron star, as well as the shock radius scale, before we reach the shock radius. Under these circumstances, the neutrinos are not well described as components of the proto-neutron star fluid everywhere within it, and therefore, they are certainly not well described as a fluid in the critical heating layer between the proto-neutron star and the shock. A kinetic description of the neutrinos is required. Such a description, based on the Boltzmann kinetic equations, would supply the neutrino distribution functions, $f(r,\theta,\phi,\mu,\phi_{p},E,t)$, for each species of neutrino and antineutrino, where $\mu$ is the direction cosine taken with respect to the outward radial direction, $\phi_{p}$ is the corresponding second angle describing the neutrino propagation direction in these momentum-space spherical polar coordinates, and $E=|p|$ is the neutrino energy. Deep in the proto-neutron star, neutrinos and the proto-neutron star fluid are in weak-interaction equilibrium. The distribution functions are then given by their equilibrium counterparts, and the neutrinos are well described as an additional component of the fluid. Of course, the neutrinos fall out of weak equilibrium as the neutrinospheres are approached, and beyond them they stream freely. Thus, a fluid description of them would be limited to only a small portion of the simulation domain and would be of equally limited utility. The nature of the weak interactions demands the greater computational challenge and the higher computational cost of a kinetic description of neutrino transport in the proto-neutron star and above it, in the cavity between it and the shock. \begin{figure}[h] \captionsetup[subfigure]{justification=centering} { \includegraphics[width=0.5\linewidth]{tmfp_nue_mag.pdf} \label{fig:tmfp_nue_mag}}~ { \includegraphics[width=0.5\linewidth]{tmfp_nuebar_mag.pdf} \label{fig:tmfp_nuebar_mag}}\\ { \includegraphics[width=0.5\linewidth]{tmfp_nux_mag.pdf} \label{fig:tmfp_nux_mag}}~ { \includegraphics[width=0.5\linewidth]{tmfp_nuxbar_mag.pdf} \label{fig:tmfp_nuxbar_mag}}~ \caption{Plots of the neutrino and antineutrino mean free paths at 100 ms after bounce, during the neutrino shock reheating epoch, for all three flavors of neutrinos at select energies. The upper left and right panels show plots of the electron-neutrino and anti-neutrino mean free paths, respectively. The lower left and right panels show plots of the heavy-flavor ($\mu$ and $\tau$) neutrino and antineutrino mean free paths, respectively. The data used to generate the plots are taken from a supernova model beginning with a $12\,M_\odot$ progenitor and evolved with the \textsc{Chimera} supernova code. To set the correct physical scale against which the mean free paths can be compared, we indicate the location of the various neutrinospheres and the shock wave. All four plots demonstrate that, as we move out in radius to lower densities, all of the mean free paths plotted vary from being much less than to much greater than the neutrinosphere radii---i.e., the characteristic spatial scale of the proto-neutron star.
Consequently, the neutrinos will not behave in a fluid-like manner everywhere, and a kinetic rather than a fluid description of them is necessary.} \label{fig:tmfp} \end{figure} \subsection{The choice of phase-space coordinates} The expansion from the four dimensions of spacetime to the seven dimensions of relativistic phase space brings with it additional choices. Now, in addition to making what will hopefully be optimal choices for spacetime coordinates, we will also need to consider optimal choices for momentum-space coordinates. And this is not without some give and take. Simplification in some respects afforded by one choice is always accompanied by complexification in other respects. There is, however, an overarching consideration that guides the typical choice made by most modelers: Neutrino--matter interactions are most naturally and, consequently, most easily described in the frame of reference of the inertial observer instantaneously comoving with the fluid. (The fluid is accelerating, but the instantaneously comoving observer is not.) In this frame, the matter is instantaneously at rest, and the neutrino four-momentum components that enter the expressions for the neutrino weak interaction rates are the components measured by the comoving observer. However, while the description of neutrino--matter interactions is simplified in this picture, the choice to use four-momenta measured by instantaneously comoving observers introduces additional terms on the left-hand side of the Boltzmann equation that correspond to relativistic angular aberration and Doppler shift, due to the fact that two spatially adjacent, instantaneously comoving observers do not necessarily have the same velocity---in general, they will measure different neutrino angles of propagation and energies. In the context of Newtonian gravity, this would certainly add considerable complexity to the left-hand side of the Boltzmann equation. But in the general relativistic case, such momentum-space advection terms, involving derivatives with respect to the neutrino angles of propagation (or their direction cosines) and the neutrino energy, are already there, in light of general relativistic angular aberration and frequency shift in curved spacetime. While the character of the physical effects---special versus general relativistic---is different and, as such, presents different numerical challenges, the trade-off between the additional complexity of adding terms corresponding to special relativistic effects---e.g., relativistic Doppler shift and angular aberration---to the left-hand side of the Boltzmann equation and the significant simplification of the collision term when comoving-frame neutrino four-momenta are used has led most modelers to choose comoving-frame neutrino four-momenta as phase-space coordinates. As we will see in this review, very different numerical approaches have been taken to describe the phase-space advection terms that result. In what follows, we will adopt the following notation: We will designate the neutrino four-momentum components measured by an inertial observer instantaneously comoving with the fluid as $p^{\hat{\mu}}$. Neutrino four-momentum components measured by an Eulerian observer will be designated as $p^{\bar{\mu}}$. Finally, the neutrino four-momentum components in the coordinate basis will be designated as $p^{\mu}$.
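Before introducing the general machinery, a simple special relativistic example, stated here purely for illustration, shows what the comoving-frame choice entails. In flat spacetime, for a fluid moving radially with speed $v$ (with $c=1$), the Lorentz transformation relating the neutrino energy and direction cosine measured in the Eulerian (laboratory) frame to those measured in the comoving frame yields the familiar Doppler shift and angular aberration formulas,
\begin{equation*}
E_{\mathrm{com}} = W\,E_{\mathrm{lab}}\left(1 - v\,\mu_{\mathrm{lab}}\right),
\qquad
\mu_{\mathrm{com}} = \frac{\mu_{\mathrm{lab}} - v}{1 - v\,\mu_{\mathrm{lab}}},
\qquad
W = \left(1-v^{2}\right)^{-1/2}.
\end{equation*}
Because $v$ varies in space and time, a neutrino propagating through the fluid is continuously shifted in comoving-frame energy and angle, and it is precisely these shifts that appear as additional momentum-space advection terms on the left-hand side of the comoving-frame Boltzmann equation. The composite transformation introduced in the next subsection encodes the general relativistic version of the same statement.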
\subsection{The general relativistic Boltzmann equation} \label{sec:GeneralRelativisticBoltzmannEquation} In light of the need to conserve simultaneously both energy and lepton number, we wish to begin with a version of the Boltzmann equation that is \emph{manifestly} conservative across all phase-space dimensions. As we will show, this is not true of the standard formulation of the general relativistic Boltzmann equation. In this section, we outline the derivation of both as presented by \citet{CaMe03} to illustrate the differences and, of course, to arrive at a form of the Boltzmann equation that is better suited to numerical application. Before we begin, we emphasize the following: While spacetime is endowed with a natural metric, $g_{{\mu}{\nu}}$, which is determined by Einstein's equations given the stress--energy content of spacetime, phase space is not. Consequently, the development of general relativistic neutrino radiation hydrodynamics \emph{requires} the full machinery of the metric-free language of the differential and integral calculus of forms. That is, the derivation we present below is not a matter of taste. Treatments of non-relativistic kinetic theory typically assume that phase space is endowed with a Euclidean metric. This can serve as a bookkeeping device at best, and it is important to interpret the theory accordingly. The one-particle phase space for particles of arbitrary mass is an eight-dimensional space, which we label $M$, of spacetime position $x$ and four-momentum $p$. If we specify a mass for the particle, $m$, which satisfies \begin{equation} m^2=-g_{\mu\nu}p^{\mu}p^{\nu}, \end{equation} we confine ourselves to a hypersurface of $M$, written $M_m$: the phase space for particles of mass $m$. The flow in $M_m$ defined by the particle trajectories $(x,p)$ is generated by the Liouville operator \begin{equation} L_{m} = p^{\hat{\mu}}\,{\cal L}^{\mu}{}_{\hat{\mu}}\,\frac{\partial}{\partial x^{\mu}} - \Gamma^{\hat{i}}{}_{\hat{\nu}\hat{\rho}}\,p^{\hat{\nu}}\,p^{\hat{\rho}}\,\frac{\partial}{\partial p^{\hat{i}}}. \label{liouville} \end{equation} ${\cal L}^{\hat{\mu}}{}_{\mu}$ is the composite transformation that takes us, first, from the coordinate basis to the orthonormal frame of the Eulerian observer at rest with respect to the ``laboratory'' and, second, via a Lorentz transformation, from the Eulerian frame to the frame of reference comoving with the stellar core fluid: \begin{equation} {\cal L}^{\hat{\mu}}{}_{\mu} = \Lambda^{\hat{\mu}}{}_{\bar{\mu}}\,e^{\bar{\mu}}{}_{\mu}. \label{eq:Ltrans} \end{equation} ${\cal L}^{\mu}{}_{\hat{\mu}}$ is the inverse transformation. $\Gamma^{\hat{\mu}}{}_{\hat{\nu}\hat{\rho}}$ are the Ricci rotation coefficients and are given by \begin{equation} \Gamma^{\hat{\mu}}{}_{\hat{\nu}\hat{\rho}} = {\cal L}^{\hat{\mu}}{}_{\mu}\,{\cal L}^{\nu}{}_{\hat{\nu}}\,{\cal L}^{\rho}{}_{\hat{\rho}}\,\Gamma^{\mu}{}_{\nu\rho} + {\cal L}^{\hat{\mu}}{}_{\mu}\,{\cal L}^{\rho}{}_{\hat{\rho}}\,\frac{\partial {\cal L}^{\mu}{}_{\hat{\nu}}}{\partial x^{\rho}}, \label{eq:ConnectionComoving} \end{equation} where $\Gamma^{\mu}{}_{\nu\rho}$ are the Levi-Civita connection coefficients corresponding to the spacetime metric $g_{\mu\nu}$. For a given type of particle of mass $m$, the distribution function, $f$, gives the density of such particles in phase space.
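A useful limiting case to keep in mind: in flat spacetime, in Cartesian coordinates, and with a single global inertial frame (so that ${\cal L}^{\mu}{}_{\hat{\mu}}$ is constant and the Ricci rotation coefficients vanish), Eq.~\eqref{liouville} reduces to the free-streaming operator
\begin{equation*}
L_{m} = p^{\mu}\frac{\partial}{\partial x^{\mu}},
\end{equation*}
and $L_{m}[f]=0$ states simply that $f$ is constant along straight-line particle trajectories. All of the complexity to follow stems from fluid motion and spacetime curvature, which render ${\cal L}^{\mu}{}_{\hat{\mu}}$ position dependent and the rotation coefficients nonzero.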
An equation for the distribution function, the Boltzmann equation, is derived by considering a closed six-dimensional hypersurface $\partial D$ bounding a region $D$ in $M_m$. The net number of particles flowing through the boundary of $D$ is given by the generalized Stokes' Theorem \begin{equation} {N}[\partial D]=\int_{\partial D} f\,\omega =\int_{D} d(f\omega), \label{stokes} \end{equation} where the infinitesimal surface element $\omega$ normal to the flow across $D$ is given by \begin{equation} \omega=L_{m}\cdot\Omega \end{equation} and $\Omega$ is an infinitesimal volume element in $M_m$. The product rule gives \begin{equation} d(f\omega)=df\wedge\omega=df\wedge(L_{m}\cdot\Omega), \label{almostLiouville} \end{equation} where we have used the fact that $d\omega=0$ (an expression of the general relativistic Liouville's Theorem, which tells us that the phase-space flow is incompressible). But $f$, $L_m$, and $\Omega$ obey the identity \begin{equation} df\wedge(L_{m}\cdot\Omega)=L_{m}[f]\,\Omega. \end{equation} Then \begin{equation} {N}[\partial D]=\int_{D} L_{m}[f]\,\Omega. \label{almostBoltzmann} \end{equation} Finally, the number of particles crossing the boundary $\partial D$ of $D$ in $M_m$ is given by the change in the number of particles in $D$ due to emission, absorption, and scattering. Defining the ``collision term,'' $\mathcal{C}[f]$, as the spacetime density of such events, we have \begin{equation} {N}[\partial D]=\int_{D} \mathcal{C}[f]\,\Omega, \label{collisionIntegral} \end{equation} and \begin{equation} L_{m}[f]=\mathcal{C}[f]. \end{equation} Substituting for $L_m$ using Eq.~\eqref{liouville}, we arrive at the Boltzmann equation in ``standard'' form: \begin{equation} p^{\hat{\mu}}\,{\cal L}^{\mu}{}_{\hat{\mu}}\,\frac{\partial f}{\partial x^{\mu}} - \Gamma^{\hat{j}}{}_{\hat{\nu}\hat{\rho}}\,p^{\hat{\nu}}\,p^{\hat{\rho}}\,\frac{\partial u^{\hat{i}}}{\partial p^{\hat{j}}}\,\frac{\partial f}{\partial u^{\hat{i}}} = \mathcal{C}[f]. \label{fullBoltzmann} \end{equation} Note that to obtain the Boltzmann equation, we had to consider integration on our phase-space manifold $M_m$, on which there is no natural metric. This \emph{necessitates} the use of the language of differential forms. If we integrate over momentum space, we obtain the balance equation for particle number \begin{equation} \frac{1}{\sqrt{-g}}\frac{\partial}{\partial x^{\mu}}\big(\sqrt{-g}\,N^{\mu}\big) = \int \mathcal{C}[f]\,\pi_{m}, \label{numberConservation} \end{equation} where \begin{equation} N^{\mu}(x) = \int f\,{\cal L}^{\mu}{}_{\hat{\mu}}\,p^{\hat{\mu}}\,\pi_{m} = \int f\,p^{\mu}\,\pi_{m} \label{numberVector} \end{equation} is the particle four-current density and \begin{equation} \pi_{m} = \frac{1}{E(\mathbf{p})}\left|\det\left[\frac{\partial\mathbf{p}}{\partial\mathbf{u}}\right]\right| du^{\hat{1}}\,du^{\hat{2}}\,du^{\hat{3}} \label{momentumElement} \end{equation} is the invariant momentum-space 3-volume expressed in terms of the spherical momentum-space coordinates $u^{\hat{i}}=(E=|\mathbf{p}|,\mu\equiv\cos\theta_{p},\phi_{p})$. But in light of the fact that the Boltzmann equation is not expressed in manifestly conservative form, it is not obvious how we arrive at Eq.~\eqref{numberConservation} by integrating over momentum space.
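The obstruction to manifest conservation can be seen directly in Eq.~\eqref{fullBoltzmann}: the momentum-space term there has the form of a coefficient times a derivative of $f$. Pulling $f$ inside the derivative by the product rule,
\begin{equation*}
\Gamma^{\hat{j}}{}_{\hat{\nu}\hat{\rho}}\,p^{\hat{\nu}}p^{\hat{\rho}}\,\frac{\partial u^{\hat{i}}}{\partial p^{\hat{j}}}\,\frac{\partial f}{\partial u^{\hat{i}}}
= \frac{\partial}{\partial u^{\hat{i}}}\left(\Gamma^{\hat{j}}{}_{\hat{\nu}\hat{\rho}}\,p^{\hat{\nu}}p^{\hat{\rho}}\,\frac{\partial u^{\hat{i}}}{\partial p^{\hat{j}}}\,f\right)
- f\,\frac{\partial}{\partial u^{\hat{i}}}\left(\Gamma^{\hat{j}}{}_{\hat{\nu}\hat{\rho}}\,p^{\hat{\nu}}p^{\hat{\rho}}\,\frac{\partial u^{\hat{i}}}{\partial p^{\hat{j}}}\right),
\end{equation*}
one sees that a divergence form is obtained only after the second term is cancelled by geometric identities (or absorbed into Jacobian factors, as below). A discretization of the standard form does not inherit this cancellation automatically, which is the source of the numerical lepton-number conservation difficulties alluded to above.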
We desire to reexpress the Boltzmann equation in terms of spacetime and momentum-space divergences so that it is manifestly conservative with respect to an integration over a spacetime region, a momentum-space region, or both---i.e., a phase-space region. Of course, the generalized Stokes' Theorem, Eq.~\eqref{stokes}, is an expression of manifest conservation, equating the change in a quantity within a volume of phase space to a surface term involving its flux on the volume's boundary. The key insight by \citet{CaMe03} was to recognize that the total exterior derivative $d(f\omega)$ in Eq.~\eqref{stokes} can instead be expressed as \begin{equation} d(f\omega) = \mathcal{N}[f]\,\Omega, \label{conservativeNumberOperator} \end{equation} where \begin{eqnarray} \mathcal{N}[f] &\equiv& \frac{1}{\sqrt{-g}}\frac{\partial}{\partial x^{\mu}}\big(\sqrt{-g}\,{\cal L}^{\mu}{}_{\hat{\mu}}\,p^{\hat{\mu}}\,f\big) \nonumber \\ & & {}- E(\mathbf{p})\left|\det\left[\frac{\partial\mathbf{p}}{\partial\mathbf{u}}\right]\right|^{-1}\frac{\partial}{\partial u^{\hat{i}}}\left(\frac{1}{E(\mathbf{p})}\left|\det\left[\frac{\partial\mathbf{p}}{\partial\mathbf{u}}\right]\right|\,\Gamma^{\hat{j}}{}_{\hat{\nu}\hat{\rho}}\,p^{\hat{\nu}}\,p^{\hat{\rho}}\,\frac{\partial u^{\hat{i}}}{\partial p^{\hat{j}}}\,f\right). \label{conservativeNumberOperator2} \end{eqnarray} Substituting Eq.~\eqref{conservativeNumberOperator} in Eq.~\eqref{stokes} and using Eq.~\eqref{collisionIntegral}, we arrive at \begin{eqnarray} \label{consBE} & & \frac{1}{\sqrt{-g}}\frac{\partial}{\partial x^{\mu}}\big(\sqrt{-g}\,{\cal L}^{\mu}{}_{\hat{\mu}}\,p^{\hat{\mu}}\,f\big) \nonumber \\ & & {}- E(\mathbf{p})\left|\det\left[\frac{\partial\mathbf{p}}{\partial\mathbf{u}}\right]\right|^{-1}\frac{\partial}{\partial u^{\hat{i}}}\left(\frac{1}{E(\mathbf{p})}\left|\det\left[\frac{\partial\mathbf{p}}{\partial\mathbf{u}}\right]\right|\,\Gamma^{\hat{j}}{}_{\hat{\nu}\hat{\rho}}\,p^{\hat{\nu}}\,p^{\hat{\rho}}\,\frac{\partial u^{\hat{i}}}{\partial p^{\hat{j}}}\,f\right) = \mathcal{C}[f], \end{eqnarray} which is the manifestly conservative formulation of the Boltzmann equation. It is now obvious that upon integration over momentum space, for example, the momentum derivative terms on the left-hand side of the Boltzmann equation in Eq.~\eqref{consBE} will give rise only to surface terms.
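Concretely, integrating the momentum-space term of Eq.~\eqref{consBE} against the invariant measure $\pi_{m}$ of Eq.~\eqref{momentumElement}, the prefactor $E(\mathbf{p})\,|\det[\partial\mathbf{p}/\partial\mathbf{u}]|^{-1}$ cancels against the corresponding factors in $\pi_{m}$, leaving a bare coordinate divergence:
\begin{equation*}
\int E(\mathbf{p})\left|\det\left[\frac{\partial\mathbf{p}}{\partial\mathbf{u}}\right]\right|^{-1}\frac{\partial\big(\cdots\big)}{\partial u^{\hat{i}}}\,\pi_{m}
= \int \frac{\partial\big(\cdots\big)}{\partial u^{\hat{i}}}\,du^{\hat{1}}\,du^{\hat{2}}\,du^{\hat{3}},
\end{equation*}
which reduces to boundary contributions that vanish for distributions decaying sufficiently rapidly as $E\rightarrow\infty$ (the $\mu$ and $\phi_{p}$ boundary terms vanishing by the geometry of the momentum-space sphere), so that Eq.~\eqref{numberConservation} follows immediately, with $N^{\mu}$ as in Eq.~\eqref{numberVector}.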
The counterpart equation for four-momentum conservation can be derived in the same way \citep{CaMe03} and is given by \begin{eqnarray} & & \frac{1}{\sqrt{-g}}\frac{\partial}{\partial x^{\nu}}\big(\sqrt{-g}\,{\cal T}^{\mu\nu}\big) \nonumber \\ & & {}- E(\mathbf{p})\left|\det\left[\frac{\partial\mathbf{p}}{\partial\mathbf{u}}\right]\right|^{-1}\frac{\partial}{\partial u^{\hat{i}}}\left(\frac{1}{E(\mathbf{p})}\left|\det\left[\frac{\partial\mathbf{p}}{\partial\mathbf{u}}\right]\right|\,\Gamma^{\hat{j}}{}_{\hat{\nu}\hat{\rho}}\,p^{\hat{\rho}}\,\frac{\partial u^{\hat{i}}}{\partial p^{\hat{j}}}\,{\cal L}^{\hat{\nu}}{}_{\nu}\,{\cal T}^{\mu\nu}\right) \nonumber \\ & & = -\Gamma^{\mu}{}_{\nu\rho}\,{\cal T}^{\nu\rho} + {\cal L}^{\mu}{}_{\hat{\mu}}\,p^{\hat{\mu}}\,\mathcal{C}[f], \label{eq:fourMomentumConservativeBoltzmann} \end{eqnarray} where \begin{equation} {\cal T}^{\mu\nu} \equiv {\cal L}^{\mu}{}_{\hat{\mu}}\,{\cal L}^{\nu}{}_{\hat{\nu}}\,p^{\hat{\mu}}\,p^{\hat{\nu}}\,f \end{equation} is the specific particle stress-energy tensor. As an illustrative example, we specialize Eq.~\eqref{consBE} to the case of spherical symmetry, Lagrangian coordinates, and $\mathcal{O}(v/c)$ transport, as in \citet{MeBr93a,MeBr93b,MeBr93c}. As shown by Cardall and Mezzacappa, Eq.~\eqref{consBE} reduces to \begin{eqnarray} & & {\partial \over \partial t}\left(f\over\rho\right) + {\partial\over\partial m}\left(4\pi r^2\rho\mu\, {f\over\rho}\right) + {1\over\epsilon^2}{\partial\over\partial\epsilon} \left(\epsilon^3\left[\mu^2\left({3 v\over r} + {\partial\ln\rho\over\partial t}\right)-{v\over r}\right]{f\over\rho}\right) \nonumber \\ & & + {\partial \over\partial\mu}\left( (1-\mu^2)\left[{1\over r}+\mu\left({3 v\over r}+ {\partial\ln\rho\over\partial t}\right) \right]{f\over\rho}\right) ={1\over\rho\, \epsilon}\,\mathcal{C}[f], \label{numberConservativeBoltzmann} \end{eqnarray} in agreement with the conservative formulation of the Boltzmann equation used in \citet{MeBr93a,MeBr93b,MeBr93c}. In spherical symmetry and to $\mathcal{O}(v/c)$, one can arrive at a manifestly conservative form of the Boltzmann equation through trial and error. However, in three dimensions and with full general relativity, such trial-and-error approaches are doomed to failure. A manifestly conservative starting point becomes paramount. \subsection{The 3+1 formulation of general relativity} The fundamental building blocks of the ``3+1'' formulation of general relativity are the spacelike hypersurfaces corresponding to surfaces of constant $\tau$, where $\tau$ is some scalar function of the spacetime coordinates $x^\mu$: $\tau=\tau(x^0,x^1,x^2,x^3)$. It is natural to choose $\tau$ to be $x^0=t$. The spacelike hypersurfaces, $\Sigma_t$, are threaded by a timelike congruence of constant-spatial-coordinate curves. The points of constant $x^i(t)$ on two hypersurfaces separated by $dt$ are connected by the four-vector $t^{\mu}$. At each point of the hypersurface $\Sigma_t$, there is a unit timelike normal four-vector $n$ satisfying $n_\mu n^\mu = -1$. $n$ corresponds to the four-velocity of the observer at rest with respect to the hypersurface. This is the generalization of the definition of the Eulerian observer familiar from non-relativistic formulations.
The four-vector $\beta$, known as the ``shift'' vector, describes how the spatial coordinates move within each hypersurface. The proper time between two hypersurfaces $\Sigma_t$ and $\Sigma_{t+dt}$ is given by $\alpha\,dt$. $\alpha$ is known as the ``lapse'' function. Given such a foliation of spacetime and such a coordinatization, the squared spacetime line element becomes \begin{equation} ds^{2} = - (\alpha^{2}-\beta_{i}\beta^{i})dt^{2} + 2\beta_{i}dx^{i}dt+\gamma_{ij}dx^{i}dx^{j}, \label{3+1metric} \end{equation} where $\gamma_{ij}$ is the metric on the hypersurface $\Sigma_t$. From Eq.~\eqref{3+1metric}, the spacetime metric can be read off as \begin{equation} g_{\mu\nu} = \left( \begin{array}{cc} -\alpha^{2}+\beta_{k}\beta^{k} & \beta_{j} \\ \beta_{i} & \gamma_{ij} \end{array} \right), \end{equation} whose determinant $g$ can be computed directly to find $\sqrt{-g}=\alpha\,\sqrt{\gamma}$, where $\gamma$ is the determinant of the spatial metric. In addition to the intrinsic geometry---specifically, the intrinsic curvature---of each spacelike hypersurface, which is determined by the metric $\gamma_{ij}$, we describe how such a hypersurface is embedded in the four-dimensional spacetime by its extrinsic curvature, $\mathsf{K}_{ij}$, which is related to the three-metric by \begin{equation} \partial_{t}\gamma_{ij}=-2\alpha \mathsf{K}_{ij}+D_{i}\beta_{j}+D_{j}\beta_{i}. \label{eq:extrinsiccurvature} \end{equation} Here $D_{i}$ corresponds to the covariant derivative on $\Sigma_{t}$ corresponding to the Levi--Civita connection associated with $\gamma_{ij}$. We can regard the metric components, $\gamma_{ij}$, as the coordinates of this formulation and the components of the extrinsic curvature, $\mathsf{K}_{ij}$, as the velocities. The dynamics is supplied by the Einstein equations, which provide the following evolution equations for the six independent components of $\mathsf{K}_{ij}$: \begin{eqnarray} \label{eq:extcurvevolequation} \partial_{t}\mathsf{K}_{ij} & = & -D_{i}D_{j}\alpha +\beta^{k}\partial_{k}\mathsf{K}_{ij}+\mathsf{K}_{ik}\partial_{j}\beta^{k}+\mathsf{K}_{kj}\partial_{i}\beta^{k} \nonumber \\ & & {}+ \alpha \left( ^{(3)}R_{ij}+\mathsf{K}\mathsf{K}_{ij}-2\mathsf{K}_{ik}\mathsf{K}^{k}_{j} \right) +4\pi\alpha [\gamma_{ij} (S-E) - 2S_{ij}], \end{eqnarray} where $\mathsf{K}$ is the trace of the extrinsic curvature tensor, and $^{(3)}R_{ij}$ is the Ricci curvature tensor for the spacelike hypersurface. The source terms in Eq.~\eqref{eq:extcurvevolequation} are given in terms of the stress--energy tensor, $T_{\alpha\beta}$, by \begin{eqnarray} \label{eq:sourceterms} S_{\mu\nu} & = & \gamma^{\alpha}_{\mu}\gamma^{\beta}_{\nu}T_{\alpha\beta}, \\ S_{\mu} & = & -\gamma^{\alpha}_{\mu}n^{\beta}T_{\alpha\beta}, \\ S & = & S^{\mu}_{\mu}, \\ E & = & n^{\alpha}n^{\beta}T_{\alpha\beta}, \end{eqnarray} \noindent where \begin{equation} n^{\mu} = \f{1}{\alpha}(1,-\beta^{i}) \quad\text{with}\quad n_{\mu} = (-\alpha,0) \end{equation} and \begin{equation} \gamma^{\alpha}_{\hspace{4pt}\mu}=\delta^{\alpha}_{\hspace{4pt}\mu}+n^{\alpha}\,n_{\mu} \end{equation} provide timelike and spacelike projections, respectively.
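The determinant relation $\sqrt{-g}=\alpha\sqrt{\gamma}$ quoted above is easy to verify symbolically. The following minimal sketch (assuming the SymPy Python library is available; all variable names are ours, for illustration only) checks the equivalent polynomial identity $\det g = -\alpha^{2}\det\gamma$ for a general symmetric spatial metric and shift:
\begin{verbatim}
import sympy as sp

alpha = sp.symbols('alpha', positive=True)
beta_up = sp.Matrix(sp.symbols('beta1:4'))          # shift components beta^i
g11, g12, g13, g22, g23, g33 = sp.symbols('g11 g12 g13 g22 g23 g33')
gamma = sp.Matrix([[g11, g12, g13],
                   [g12, g22, g23],
                   [g13, g23, g33]])                # spatial metric gamma_ij

beta_dn = gamma * beta_up                           # beta_i = gamma_ij beta^j

g = sp.zeros(4, 4)                                  # assemble g_{mu nu}
g[0, 0] = -alpha**2 + (beta_dn.T * beta_up)[0, 0]
for i in range(3):
    g[0, i + 1] = g[i + 1, 0] = beta_dn[i]
    for j in range(3):
        g[i + 1, j + 1] = gamma[i, j]

# det g + alpha^2 det gamma expands to zero identically
assert sp.expand(g.det() + alpha**2 * gamma.det()) == 0
\end{verbatim}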
There is a corresponding spacelike hypersurface to which the fluid four-velocity \begin{equation} u^{\mu} = W\,(\,n^{\mu}+v^{\mu}\,) \label{eq:fluidFourVelocityEulerian} \end{equation} is the unit timelike normal. It defines the timelike basis element of the orthonormal frame of reference of the inertial observer instantaneously comoving with the fluid and at rest with respect to that hypersurface. This is our generalized Lagrangian observer in this formalism. The projection into the slice defined by the normal $u^{\mu}$ is given by \begin{equation} h^{\alpha}_{\hspace{4pt}\mu}=\delta^{\alpha}_{\hspace{4pt}\mu}+u^{\alpha}\,u_{\mu}. \label{eq:projectorLagrangian} \end{equation} Here, $W=-n_{\mu}u^{\mu}$ is the Lorentz factor and $v^{\mu}=(\gamma^{\mu}_{\hspace{4pt}\nu}u^{\nu})/W$ is the fluid three-velocity; the normalization $u_{\mu}u^{\mu}=-1$ then gives $W=(1-v_{\mu}v^{\mu})^{-1/2}$. \subsection{3+1 general relativistic hydrodynamics} \label{sec:hydrodynamics3p1} The 3+1 slicing of spacetime allows us to formulate the radiation-hydrodynamics equations in a form suitable for numerical solution. Here we briefly summarize the 3+1 form of the hydrodynamics equations given by Eqs.~\eqref{eq:BaryonMassConservation}--\eqref{eq:ElectronNumberConservation} (see, e.g., \citealt{Anile89,ReZa13} for details). The mass conservation equation (cf.\ Eq.~\eqref{eq:BaryonMassConservation}) becomes \begin{equation} \f{1}{\alpha\sqrt{\gamma}} \big[\, \pd{}{t}\big(\,\sqrt{\gamma}\,D\,\big) + \pd{}{i}\big(\,\sqrt{\gamma}\,D\,\big[\,\alpha\,v^{i}-\beta^{i}\,\big]\,\big) \,\big] =0, \label{eq:BaryonMassConservation3p1} \end{equation} where $D=W\,\rho$, while the electron number conservation equation (cf.\ Eq.~\eqref{eq:ElectronNumberConservation}) becomes \begin{equation} \f{1}{\alpha\sqrt{\gamma}} \big[\, \pd{}{t}\big(\,\sqrt{\gamma}\,D\,Y_{e}\,\big) + \pd{}{i}\big(\,\sqrt{\gamma}\,D\,Y_{e}\,\big[\,\alpha\,v^{i}-\beta^{i}\,\big]\,\big) \,\big] =-m_{\mbox{\tiny B}}\,L. \label{eq:ElectronNumberConservation3p1} \end{equation} Conservative forms of the energy and momentum equations are derived by decomposing Eq.~\eqref{eq:fluidFourMomentumConservation} into components relative to the spatial hypersurface. The energy equation becomes \begin{align} &\f{1}{\alpha\sqrt{\gamma}} \big[\, \pd{}{t}\big(\,\sqrt{\gamma}\,\tau_{\mbox{\tiny fluid}}\,\big) +\pd{}{i}\big(\,\sqrt{\gamma}\,\big[\,\alpha\,(S^{i}-D\,v^{i})-\tau_{\mbox{\tiny fluid}}\,\beta^{i}\,\big]\,\big) \,\big] \nonumber \\ &\hspace{12pt} =\f{1}{\alpha}\,\big[\,\alpha\,S^{ik}\,\mathsf{K}_{ik}-S^{i}\pd{\alpha}{i}\,\big]+n_{\mu}\,G^{\mu}, \label{eq:fluidEnergyEquation3p1} \end{align} where $\tau_{\mbox{\tiny fluid}}=E-D$, $E=\rho\,h\,W^{2}-p$, $S^{\mu}=\rho\,h\,W^{2}\,v^{\mu}$, and $S^{\mu\nu}=\rho\,h\,W^{2}\,v^{\mu}\,v^{\nu}+p\,\gamma^{\mu\nu}$, while the momentum equation is given by \begin{align} &\f{1}{\alpha\sqrt{\gamma}} \big[\, \pd{}{t}\big(\,\sqrt{\gamma}\,S_{j}\,\big) +\pd{}{i}\big(\,\sqrt{\gamma}\,\big[\,\alpha\,S^{i}_{\hspace{2pt}j}-\beta^{i}\,S_{j}\,\big]\,\big) \,\big] \nonumber \\ &\hspace{12pt} =\f{1}{\alpha}\,\big[\,S_{i}\,\pd{\beta^{i}}{j}+\f{1}{2}\,\alpha\,S^{ik}\pd{\gamma_{ik}}{j}-E\,\pd{\alpha}{j}\,\big] - \gamma_{j\mu}\,G^{\mu}. \label{eq:fluidMomentumEquation3p1} \end{align} The source terms modeling lepton and four-momentum exchange due to neutrino--matter interactions ($-m_{\mbox{\tiny B}}\,L$, $n_{\mu}\,G^{\mu}$, and $-\gamma_{j\mu}\,G^{\mu}$, respectively, as they appear in Eqs.~\eqref{eq:ElectronNumberConservation3p1}, \eqref{eq:fluidEnergyEquation3p1}, and \eqref{eq:fluidMomentumEquation3p1}) will be discussed in detail in Sect.~\ref{sec:interactions}. 
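As a quick orientation (a worked limiting case), in flat spacetime with Cartesian coordinates ($\alpha=1$, $\beta^{i}=0$, $\sqrt{\gamma}=1$), Eq.~\eqref{eq:BaryonMassConservation3p1} reduces to the special relativistic continuity equation
\begin{equation*}
\pd{}{t}\big(\,W\rho\,\big)+\pd{}{i}\big(\,W\rho\,v^{i}\,\big)=0,
\end{equation*}
which in turn reduces to the familiar Newtonian continuity equation as $W\to1$.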
\subsection{The 3+1 general relativistic Boltzmann equation} The general relativistic Boltzmann equation, in conservative form and in the spacetime coordinates associated with the 3+1 decomposition of spacetime, was derived by \citet{CaEnMe13b}. Essential to the derivation is the recognition that the composite transformation $L^{\mu}_{\hat{\mu}}$ can be viewed as the coordinate basis components ($\mu$) of the element of the tetrad of four-vectors ($\hat{\mu}$) corresponding to the frame carried by the observer instantaneously comoving with the fluid. The Eulerian decomposition of $L^{\mu}_{\hat{\mu}}$ into timelike and spacelike components is \begin{equation} L^{\mu}_{\hat{\mu}}={\cal L}_{\hat{\mu}}n^{\mu}+\ell^{\mu}_{\hat{\mu}}, \label{eq:tetraddecomposition} \end{equation} where ${\cal L}_{\hat{\mu}}$ is the coefficient of the timelike component of the tetrad element (four-vector) designated by $\hat{\mu}$, and $\ell^{\mu}_{\hat{\mu}}$ is the spacelike component of this tetrad element. Explicit expressions for ${\cal L}_{\hat{\mu}}$ and $\ell^{\mu}_{\hat{\mu}}$ can be found in \citet{CaEnMe13b}. The Ricci rotation coefficients can be expressed as \begin{equation} \Gamma^{\hat{\rho}}_{\hat{\nu}\hat{\mu}}= L^{\hat{\rho}}_{\nu} L^{\mu}_{\hat{\mu}}\nabla_{\mu} L^{\nu}_{\hat{\nu}}. \label{eq:riccirotcoeffdecomp} \end{equation} Using the decomposition in Eq.~\eqref{eq:tetraddecomposition}, we are left with three terms to evaluate: \begin{equation} L^{\hat{\rho}}_{\nu} L^{\mu}_{\hat{\mu}} \left( {\cal L}_{\hat{\nu}}\nabla_{\mu}n^{\nu} +n^{\nu}\nabla_{\mu}{\cal L}_{\hat{\nu}} +\nabla_{\mu}\ell^{\nu}_{\hat{\nu}} \right). \label{eq:threeterms} \end{equation} The results can be found in \citet{CaEnMe13b}. With the decomposition of the momentum-space transformation matrix $P^{\tilde{i}}_{\hat{i}}$ into elements parallel and perpendicular to the three-momentum $p^{\hat{i}}$, \begin{equation} P^{\tilde{i}}_{\hat{i}} = \frac{Q^{\tilde{i}}p_{\hat{i}}}{p} + U^{\tilde{i}}_{\hat{i}}, \label{eq:momentumdecomp} \end{equation} with \begin{eqnarray} \label{eq:momentumdecomp2} Q^{\tilde{i}} & = & \frac{P^{\tilde{i}}_{\hat{i}} p^{\hat{i}}}{p}, \\ p & = & \sqrt{p^{\hat{i}}p_{\hat{i}}}, \\ U^{\tilde{i}}_{\hat{i}} & = & P^{\tilde{i}}_{\hat{j}} k^{\hat{j}}_{\hat{i}}, \\ k^{\hat{j}}_{\hat{i}} & = & \delta^{\hat{j}}_{\hat{i}} - \frac{p^{\hat{j}}p_{\hat{i}}}{p^2}. \end{eqnarray} The 3+1 general relativistic Boltzmann equation can now be written as \begin{equation} S_N + M_N = C[f], \label{eq:ConservativeCovariant} \end{equation} where the spacetime divergence is \begin{equation} S_N = \frac{\left( -p_{\hat 0} \right)}{\alpha\sqrt{\gamma}}\left[ \frac{\partial \left(D_N \right)}{\partial t} + \frac{\partial \left( F_N \right)^i }{\partial x^i} \right], \label{eq:Spacetime_N_31} \end{equation} with \begin{eqnarray} D_N &=& \frac{\sqrt{\gamma}}{\left( -p_{\hat 0} \right)} \, \mathcal{L}_{\hat\mu} \, p^{\hat\mu} f, \label{eq:Density_N} \\ \left( F_N \right)^i &=& \frac{\sqrt{\gamma}}{\left( -p_{\hat 0} \right)} \left( \alpha\, {\ell^i}_{\hat\mu} - \beta^i \mathcal{L}_{\hat\mu} \right) p^{\hat\mu} f. \label{eq:Flux_N} \end{eqnarray} $D_N$ and $\left( F_N \right)^i $ are, respectively, the conserved number density and number flux. The momentum-space divergence, $M_N$, can be expressed as \begin{eqnarray} M_N &=& \frac{1}{\alpha\sqrt{\gamma}} \frac{\left( -p_{\hat 0} \right)}{\sqrt{\lambda}} \frac{\partial}{\partial p^{\tilde\imath}} \left\{ \sqrt{\lambda} \, \frac{Q^{\tilde\imath} \left(-p_{\hat 0}\right)}{p}\! 
\left[ \left( R_N \right)^{\hat 0} + \left( O_N \right)^{\hat 0} \right] \right. \nonumber \\ && \left. + \sqrt{\lambda} \, {U^{\tilde \imath}}_{\hat \imath} \left[ \left( R_N \right)^{\hat \imath} + \left( O_N \right)^{\hat \imath} \right] \right\}, \label{eq:Momentum_N_31} \end{eqnarray} where \begin{eqnarray} \left( R_N \right)^{\hat\rho} &=& \frac{\alpha\sqrt{\gamma}}{\left( -p_{\hat 0} \right)}\, p^{\hat\nu} p^{\hat\mu} f \nonumber \\ & & \times \left[ \mathcal{L}^{\hat\rho}\, {\ell^j}_{\hat\nu} \left( \frac{ \mathcal{L}_{\hat\mu} }{\alpha} \frac{\partial \alpha}{\partial x^j} - {\ell^k}_{\hat\mu} \, \mathsf{K}_{jk} \right) \right. \nonumber \\ & & \left. - {\ell^{\hat\rho j}} \! \left( \! \frac{\mathcal{L}_{\hat\nu} \mathcal{L}_{\hat\mu} }{\alpha} \frac{\partial \alpha}{\partial x^j} \!-\! \frac{\ell_{k\hat\nu} \, \mathcal{L}_{\hat\mu}}{\alpha} \frac{\partial \beta^k}{\partial x^j} \!-\! \frac{{\ell^k}_{\hat\nu} \,{\ell^i}_{\hat\mu}}{2} \frac{\partial \gamma_{ki} }{\partial x^j} \!\right) \!\right]\! \label{eq:Redshift_N} \end{eqnarray} describes momentum shifts (i.e., redshift and angular aberration in momentum-space spherical coordinates) due to gravity as embodied in the spacetime geometry, \begin{eqnarray} \left( O_N \right)^{\hat\rho} &=& \frac{\sqrt{\gamma}}{\left( -p_{\hat 0} \right)} \, p^{\hat\nu} p^{\hat\mu} f \nonumber \\ & & \times \left\{ \mathcal{L}^{\hat\rho} \left[ \mathcal{L}_{\hat\mu} \frac{\partial \mathcal{L}_{\hat\nu}}{\partial t} + \left( \alpha\, {\ell^j}_{\hat\mu} - \beta^j \mathcal{L}_{\hat\mu}\right) \frac{\partial \mathcal{L}_{\hat\nu}}{\partial x^j} \right] \right. \nonumber \\ & & \left. - \ell^{\hat\rho k} \left[ \mathcal{L}_{\hat\mu} \frac{\partial \ell_{k \hat\nu}}{\partial t} + \left(\alpha\, {\ell^j}_{\hat\mu} - \beta^j \mathcal{L}_{\hat\mu}\right) \frac{\partial \ell_{k \hat\nu}}{\partial x^j} \right] \right\} \label{eq:Observer_N} \end{eqnarray} are `observer corrections' due to the acceleration of the fluid, i.e., to the fact that the comoving observers and their velocities change from point to point in spacetime (corrections that are partially entangled with the geometry as well), and \begin{equation} \sqrt{\lambda}=\Bigg|\det \big[\frac{\partial {\bf p}}{\partial {\bf u}}\big]\Bigg| . \end{equation} \subsection{Multi-frequency moment kinetics and the closure problem} \label{sec:MomentKineticsAndClosure} Because of the prohibitively high computational cost associated with solving the Boltzmann equation with sufficient phase-space resolution, most supernova models to date (and all models in three spatial dimensions) employ a moments approach to neutrino transport. In the moments approach, one solves for a finite number of moments of the distribution function (instead of the distribution function itself), and the hierarchy of moment equations is closed by a closure procedure relating higher-order moments to the evolved lower-order moments. The basic idea of the moments approach can be illustrated by considering the Boltzmann equation in one spatial dimension \begin{equation} \pd{f}{t}+\mu\pd{f}{x} = \chi\,(f_{0}-f) + \sigma\,(\langle f \rangle-f), \label{eq:boltzmannSimple} \end{equation} where, for simplicity, we let the distribution function depend on spatial position, $x$, momentum-space angle cosine, $\mu$, and time, $t$. Here, $\chi$ is the absorption opacity, $f_{0}$ is the isotropic equilibrium distribution, $\sigma$ is the scattering opacity due to isotropic and isoenergetic scattering, and $\langle f \rangle=\f{1}{2}\int_{-1}^{1}f\,d\mu$ is the angular average of the distribution function. 
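Note that isotropic, isoenergetic scattering conserves neutrino number: the zeroth angular moment of the scattering term vanishes identically,
\begin{equation*}
\f{1}{2}\int_{-1}^{1}\sigma\,\big(\,\langle f \rangle-f\,\big)\,d\mu
=\sigma\,\big(\,\langle f \rangle-\langle f \rangle\,\big)=0,
\end{equation*}
which is why only the absorption term survives on the right-hand side of the zeroth-moment equation below.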
A finite number ($N+1$) of angular moments of the distribution function can be formed as weighted integrals over angle: \begin{equation} m^{(k)}(x,t)=\langle\,f,\,\mu^{k}\,\rangle\equiv\f{1}{2}\int_{-1}^{1}f(\mu,x,t)\,\mu^{k}\,d\mu,\quad k=0,1,\ldots,N. \label{eq:momentsSimple} \end{equation} Thus, in a truncated moments approach the distribution function is approximated by the moments vector \begin{equation} \mathbf{m}_{N}=\big(\,m^{(0)},m^{(1)},\ldots,m^{(N)}\,\big)^{T} \end{equation} so that \begin{equation} f(\mu,x,t) \approx \sum_{k=0}^{N}c^{(k)}\,m^{(k)}(x,t)\,\mu^{k}, \label{eq:expansionSimple} \end{equation} where $c^{(k)}$ are normalization constants. Similarly, by taking moments of the Boltzmann equation in Eq.~\eqref{eq:boltzmannSimple}, the hierarchy of moment equations is given by \begin{align} \pd{m^{(0)}}{t}+\pd{m^{(1)}}{x} &=\chi\,(f_{0} - m^{(0)}), \label{eq:zerothMomentEquationSimple} \\ \pd{m^{(k)}}{t}+\pd{m^{(k+1)}}{x} &= \chi\,(\langle f_{0},\mu^{k} \rangle-m^{(k)}) + \sigma\,(m_{0}^{(k)}-m^{(k)}), \quad\text{for}\quad k>0, \label{eq:higherMomentEquationSimple} \end{align} where on the right-hand sides we have defined \begin{equation} \langle f_{0},\mu^{k} \rangle=f_{0}\,\f{[1+(-1)^{k}]}{2\,(k+1)} \quad\mbox{and}\quad m_{0}^{(k)}=m^{(0)}\,\f{[1+(-1)^{k}]}{2\,(k+1)}. \end{equation} When considering the expansion in Eq.~\eqref{eq:expansionSimple}, the moments approach is simply an approximation to the angular dependence of the distribution function in terms of the monomial basis $\{\mu^{k}\}_{k=0}^{N}$. The power of the moments approach becomes evident when collisions are moderate to strong. In this case, collisions tend to drive the zeroth moment $m^{(0)}$ towards the isotropic distribution $f_{0}$, the higher-order odd moments decay exponentially to zero ($m^{(k)}\to0$; $k$ odd), and the higher-order even moments tend to $m^{(k)}\to m^{(0)}/(k+1)$ ($k$ even). Thus, the angular dependence of the distribution is captured well by only a few moments. In the absence of collisions, more moments are typically needed to capture the angular shape of the distribution function. Note in particular that in Eq.~\eqref{eq:higherMomentEquationSimple}, the equation for the $k$-th moment contains the $(k+1)$-th moment. Thus, in a truncated moment model based on $N+1$ moments, $\mathbf{m}_{N}$, the equation for $m^{(N)}$ contains the moment $m^{(N+1)}$, which must be related to the lower-order moments by a closure procedure---i.e., $m^{(N+1)}:=g(\mathbf{m}_{N})$---in order to form a closed system of equations. This is referred to as the closure problem. Typically, the closure function $g$ is a nonlinear function of $\mathbf{m}_{N}$, which can make the construction of numerical methods for moment models more difficult. There are several challenges associated with the construction of closures for moment hierarchies \citep[see, e.g.,][]{Le96}, one being the construction of closures that preserve the hyperbolic character of the system of moment equations; see, e.g., \citet{PoIbMi00}, for a discussion of this topic in the context of two-moment models. In the remainder of this section, we will discuss relativistic two-moment models ($N=1$ in the simpler formalism above). In the multi-dimensional setting, the two-moment model evolves four unknowns (e.g., the energy density and three components of the momentum density), and, in the relativistic setting considered here, second and third moments appear in the equations for the zeroth and first moments. 
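To make the closure problem concrete, the following minimal Python sketch (illustrative only; the quadrature order, the sample distribution, and the function names are our own choices) computes the moments in Eq.~\eqref{eq:momentsSimple} numerically and shows that the truncated set $\mathbf{m}_{N}$ must be supplemented by a prescription for $m^{(N+1)}$:
\begin{verbatim}
import numpy as np

# Gauss-Legendre nodes and weights on [-1,1] for the angular integrals.
mu, w = np.polynomial.legendre.leggauss(32)

def moment(f, k):
    # k-th angular moment: m^(k) = (1/2) int_{-1}^{1} f(mu) mu^k dmu
    return 0.5 * np.dot(w, f(mu) * mu**k)

# Sample forward-peaked distribution (purely illustrative).
f = lambda x: np.exp(2.0 * x)

N = 3
m_N = [moment(f, k) for k in range(N + 1)]  # evolved moments m^(0)..m^(N)
m_next = moment(f, N + 1)                   # m^(N+1) enters the equation for
                                            # m^(N) but is not contained in
                                            # m_N: a closure must supply it.
print(m_N, m_next)
\end{verbatim}
Replacing the sample distribution by an isotropic one (\texttt{f = lambda x: np.ones\_like(x)}) reproduces the limits quoted above: the odd moments vanish and the even moments equal $m^{(0)}/(k+1)$.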
Conservative, 3+1 general relativistic, multi-frequency (or multi-energy) two-moment formalisms have been developed by \citet{ShKiSe11,CaEnMe13a}. The formalism of \citet{ShKiSe11} is based on the formalism of \citet{Th81}, while the formalism of \citet{CaEnMe13a} starts out with the conservative formulation of kinetic theory from \citet{CaMe03} discussed in Sect.~\ref{sec:GeneralRelativisticBoltzmannEquation}. Both approaches, of course, lead to the same result, which we summarize here. Covariant expressions for the first few moments of the distribution function $f$ are given by \begin{align} N^{\mu}(x,t) &= \int_{V_{p}} f(p,x,t)\,p^{\mu}\,\pi_{m}, \label{eq:numberMoments} \\ T^{\mu\nu}(x,t) &=\int_{V_{p}} f(p,x,t)\,p^{\mu}\,p^{\nu}\,\pi_{m}, \label{eq:stressEnergyMoments} \\ Q^{\mu\nu\rho}(x,t) &=\int_{V_{p}} f(p,x,t)\,p^{\mu}\,p^{\nu}\,p^{\rho}\,\pi_{m}, \label{eq:heatFluxMoments} \end{align} where $N^{\mu}$ is the four-current density, $T^{\mu\nu}$ the stress-energy tensor, and the rank-three tensor of moments $Q^{\mu\nu\rho}$ is sometimes referred to as the tensor of fluxes or the heat flux tensor. When expressed in terms of comoving-frame spherical-polar momentum coordinates $(\varepsilon,\vartheta,\varphi)$, the invariant momentum-space 3-volume in Eq.~\eqref{momentumElement} is \begin{equation} \pi_{m} = \varepsilon\,\sin\vartheta\,d\vartheta\,d\varphi\,d\varepsilon. \end{equation} Higher-order moments can be constructed in the same way, but we will limit the discussion to moment models involving the moments in Eqs.~\eqref{eq:numberMoments}-\eqref{eq:heatFluxMoments}. Note that the moments defined above depend only on position $x$ and time $t$. However, because neutrino heating and cooling rates are sensitive to the neutrino energy (cf.\ Sect.~\ref{sec:SettingTheStage}), supernova models based on moment descriptions for neutrino transport retain the energy dimension and solve for \emph{angular moments}, or \emph{spectral moments}, defined by \begin{align} \mathcal{N}^{\mu}(\varepsilon,x,t) &=\f{1}{4\pi}\int_{\mathbb{S}^{2}}f\,p^{\mu}\,\f{d\omega}{\varepsilon}, \label{eq:numberAngularMoments} \\ \mathcal{T}^{\mu\nu}(\varepsilon,x,t) &=\f{1}{4\pi}\int_{\mathbb{S}^{2}}f\,p^{\mu}\,p^{\nu}\,\f{d\omega}{\varepsilon}, \label{eq:stressEnergyAngularMoments} \\ \mathcal{Q}^{\mu\nu\rho}(\varepsilon,x,t) &=\f{1}{4\pi}\int_{\mathbb{S}^{2}}f\,p^{\mu}\,p^{\nu}\,p^{\rho}\,\f{d\omega}{\varepsilon}, \label{eq:heatFluxAngularMoments} \end{align} where $d\omega=\sin\vartheta\,d\vartheta\,d\varphi$ and the integrals extend over the sphere \begin{equation} \mathbb{S}^{2} = \big\{\,\omega=(\vartheta,\varphi)~|~\vartheta\in[0,\pi],\,\varphi\in[0,2\pi)\,\big\}, \end{equation} where $\vartheta$ and $\varphi$ are momentum-space angular coordinates. The angular moments defined in Eqs.~\eqref{eq:numberAngularMoments}-\eqref{eq:heatFluxAngularMoments} depend on the neutrino energy, $\varepsilon$, position, $x$, and time, $t$. They are related to the moments in Eqs.~\eqref{eq:numberMoments}-\eqref{eq:heatFluxMoments} by the integral over energy \begin{equation} \big\{\,N^{\mu},\,T^{\mu\nu},\,Q^{\mu\nu\rho}\,\big\}(x,t) =\int_{0}^{\infty}\big\{\,\mathcal{N}^{\mu},\,\mathcal{T}^{\mu\nu},\,\mathcal{Q}^{\mu\nu\rho}\,\big\}(\varepsilon,x,t)\,dV_{\varepsilon}, \end{equation} where the infinitesimal energy-space shell-volume element is $dV_{\varepsilon}=4\pi\varepsilon^{2}d\varepsilon$. 
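This relation can be verified directly: the $4\pi$ in $dV_{\varepsilon}$ cancels the $1/4\pi$ in the definitions of the angular moments, and $\pi_{m}=\varepsilon\,d\omega\,d\varepsilon$, so that
\begin{equation*}
\int_{0}^{\infty}\mathcal{N}^{\mu}\,dV_{\varepsilon}
=\int_{0}^{\infty}\!\int_{\mathbb{S}^{2}}f\,p^{\mu}\,\varepsilon\,d\omega\,d\varepsilon
=\int_{V_{p}}f\,p^{\mu}\,\pi_{m}
=N^{\mu},
\end{equation*}
and similarly for $\mathcal{T}^{\mu\nu}$ and $\mathcal{Q}^{\mu\nu\rho}$.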
In forming the angular moments we have used the freedom to choose distinct spacetime and momentum-space coordinates: $x$ and $t$ are spacetime coordinates in a global coordinate basis, while $\{\varepsilon,\vartheta,\varphi\}$ are momentum coordinates in a comoving basis. Moment equations governing the evolution of the angular moments are derived from the general relativistic Boltzmann equation discussed in Sect.~\ref{sec:GeneralRelativisticBoltzmannEquation}. Since current supernova modelers employing angular moment models use either a flux-limited diffusion (one-moment) or a two-moment approach, we will limit the discussion to these approaches. In this context, we will need evolution equations for the spectral neutrino number density, energy density, and three-momentum density. The evolution equation for the neutrino number density is obtained by multiplying Eq.~\eqref{consBE} by $1/(4\pi\varepsilon)$ and integrating over $\mathbb{S}^{2}$: \begin{equation} \nabla_{\nu}\mathcal{N}^{\nu} -\f{1}{\varepsilon^{2}}\pderiv{}{\varepsilon}\big(\,\varepsilon^{2}\,\mathcal{T}^{\mu\nu}\,\nabla_{\mu}u_{\nu}\,\big) =\f{1}{4\pi}\int_{\mathbb{S}^{2}}\mathcal{C}[f]\,\f{d\omega}{\varepsilon}, \label{eq:spectralNumberEquation} \end{equation} where $u_{\nu}$ is the four-velocity of the observer measuring neutrino energy $\varepsilon$ (i.e., the comoving observer). Note that the left-hand side of Eq.~\eqref{eq:spectralNumberEquation} is in divergence form, and the use of spherical momentum-space coordinates is apparent from the form of the second term. Integrating over energy ($dV_{\varepsilon}$) gives rise to the balance equation \begin{equation} \nabla_{\nu}N^{\nu} = \int_{V_{p}}\mathcal{C}[f]\,\pi_{m}, \label{eq:numberEquation} \end{equation} where the left-hand side is in conservative form. The right-hand side gives rise to lepton exchange sources and sinks due to neutrino--matter interactions (e.g., emission and absorption). In a similar manner, conservative evolution equations for the neutrino four-momentum are obtained by multiplying the four-momentum conservative Boltzmann equation in Eq.~\eqref{eq:fourMomentumConservativeBoltzmann} by $1/(4\pi\varepsilon)$ and integrating over $\mathbb{S}^{2}$: \begin{equation} \nabla_{\nu}\mathcal{T}^{\mu\nu} -\f{1}{\varepsilon^{2}}\pderiv{}{\varepsilon}\big(\,\varepsilon^{2}\,\mathcal{Q}^{\mu\nu\rho}\,\nabla_{\nu}u_{\rho}\,\big) =\f{1}{4\pi}\int_{\mathbb{S}^{2}}\mathcal{C}[f]\,p^{\mu}\,\f{d\omega}{\varepsilon}. \label{eq:spectralFourMomentumEquation} \end{equation} Again, integrating this equation over energy results in the balance equation \begin{equation} \nabla_{\nu}T^{\mu\nu} = \int_{V_{p}}\mathcal{C}[f]\,p^{\mu}\,\pi_{m}, \label{eq:fourMomentumEquation} \end{equation} where the left-hand side is in conservative form, and the right-hand side gives rise to four-momentum exchange with the fluid. Eq.~\eqref{eq:spectralFourMomentumEquation} forms a basis for the two-moment model for neutrino transport. Since neutrinos exchange lepton number and four-momentum with the fluid, Eq.~\eqref{eq:spectralNumberEquation} needs to be considered as well. However, these equations are not independent. 
Due to the relations (which follow directly from the definitions in Eqs.~\eqref{eq:numberAngularMoments}-\eqref{eq:heatFluxAngularMoments}) \begin{equation} \mathcal{N}^{\nu} = -\f{u_{\mu}}{\varepsilon}\,\mathcal{T}^{\mu\nu} \quad\text{and}\quad \mathcal{T}^{\nu\rho} = -\f{u_{\mu}}{\varepsilon}\,\mathcal{Q}^{\mu\nu\rho}, \end{equation} Eqs.~\eqref{eq:spectralNumberEquation} and \eqref{eq:spectralFourMomentumEquation} are related in a similar way: Eq.~\eqref{eq:spectralNumberEquation} can be obtained from Eq.~\eqref{eq:spectralFourMomentumEquation} by a contraction with $-u_{\mu}/\varepsilon$. In a numerical implementation targeting both lepton and four-momentum exchange between neutrinos and the stellar fluid, such consistency is desirable, since the numerical method then preserves a critical structure of the moment system. In the following, we provide versions of the two-moment model in the 3+1 framework of general relativity. Before we delve into the details, we briefly discuss two useful decompositions of the angular moments. \subsubsection{Lagrangian decompositions} \label{sec:LagrangianDecompositions} With comoving-frame four-momentum coordinates, Lagrangian decomposition of tensors is a natural way to express the angular moments in Eqs.~\eqref{eq:numberAngularMoments}-\eqref{eq:heatFluxAngularMoments} in terms of elementary moments of the distribution function. This is achieved with the Lagrangian decomposition of the particle four-momentum \begin{equation} p^{\mu} = \varepsilon\,(\,u^{\mu}+\ell^{\mu}\,), \label{eq:fourMomentumLagrangianDecomposition} \end{equation} where $u^{\mu}$ is the four-velocity of the Lagrangian observer, and $\ell^{\mu}$ is a unit four-vector orthogonal to $u^{\mu}$; i.e., $\ell_{\mu}\ell^{\mu}=1$ and $u_{\mu}\ell^{\mu}=0$. Then, $\varepsilon=-u_{\mu}p^{\mu}$ is the neutrino energy measured by the Lagrangian observer (indeed, $-u_{\mu}p^{\mu}=-\varepsilon\,(u_{\mu}u^{\mu}+u_{\mu}\ell^{\mu})=\varepsilon$). In terms of the composite transformation of the neutrino four-momentum, $p^{\mu}=\mathcal{L}^{\mu}_{\hspace{6pt}\hat{\mu}}p^{\hat{\mu}}=\varepsilon\big(\mathcal{L}^{\mu}_{\hspace{6pt}\hat{0}}+\mathcal{L}^{\mu}_{\hspace{6pt}\hat{\imath}}\,\ell^{\hat{\imath}}\big)$, a comparison with Eq.~\eqref{eq:fourMomentumLagrangianDecomposition} implies that $\mathcal{L}^{\mu}_{\hspace{6pt}\hat{0}}=u^{\mu}$ and $\ell^{\mu}=\mathcal{L}^{\mu}_{\hspace{6pt}\hat{\imath}}\,\ell^{\hat{\imath}}$, where \begin{equation} \ell^{\hat{\imath}} = \big\{\,\cos\vartheta,\,\sin\vartheta\cos\varphi,\,\sin\vartheta\sin\varphi\,\big\} \end{equation} are components of the spatial unit vector in the orthonormal comoving frame. (See Sect.~\ref{sec:GeneralRelativisticBoltzmannEquation} for the definition of $\mathcal{L}^{\mu}_{\hspace{6pt}\hat{\mu}}$.) Inserting Eq.~\eqref{eq:fourMomentumLagrangianDecomposition} into Eq.~\eqref{eq:numberAngularMoments} results in the Lagrangian decomposition of the spectral neutrino four-current density \begin{equation} \mathcal{N}^{\mu} = \mathcal{D}\,u^{\mu} + \mathcal{I}^{\mu}, \label{eq:numberCurrentLagrangianDecomposition} \end{equation} where the angular moments \begin{equation} \big\{\mathcal{D},\mathcal{I}^{\mu}\big\}(\varepsilon,x,t) = \f{1}{4\pi}\int_{\mathbb{S}^{2}}f(\omega,\varepsilon,x,t)\,\big\{\,1,\,\ell^{\mu}\,\big\}\,d\omega \label{eq:numberMomentsLagrangian} \end{equation} are the comoving spectral number density and number flux, respectively. 
Using the fluid four-velocity $u^{\mu}$ and the projector in Eq.~\eqref{eq:projectorLagrangian}, these components are obtained from $\mathcal{D}=-u_{\mu}\mathcal{N}^{\mu}$ and $\mathcal{I}^{\mu}=h^{\mu}_{\hspace{6pt}\nu}\,\mathcal{N}^{\nu}$. The moments in Eq.~\eqref{eq:numberMomentsLagrangian} are the most elementary in the moment hierarchy, and for the two-moment model, these are used in the closure procedure to determine the higher-order moments in terms of $\mathcal{D}$ and $\mathcal{I}^{\mu}$. Note that for an isotropic distribution function $f=f_{0}$ (where $f_{0}$ is independent of $\omega$), $\mathcal{D}=f_{0}$ and $\mathcal{I}^{\mu}=0$. In a similar way, using Eq.~\eqref{eq:fourMomentumLagrangianDecomposition} in Eq.~\eqref{eq:stressEnergyAngularMoments}, the Lagrangian decomposition of the stress-energy tensor is given by \begin{equation} \mathcal{T}^{\mu\nu} = \mathcal{J}\,u^{\mu}\,u^{\nu} + \mathcal{H}^{\mu}\,u^{\nu} + u^{\mu}\,\mathcal{H}^{\nu} + \mathcal{K}^{\mu\nu}, \label{eq:stressEnergyLagrangianDecomposition} \end{equation} where \begin{equation} \big\{\,\mathcal{J},\,\mathcal{H}^{\mu},\,\mathcal{K}^{\mu\nu}\,\big\}(\varepsilon,x,t) = \f{\varepsilon}{4\pi}\int_{\mathbb{S}^{2}}f(\omega,\varepsilon,x,t)\,\big\{1,\,\ell^{\mu},\,\ell^{\mu}\ell^{\nu}\,\big\}\,d\omega, \label{eq:energyMomentsLagrangian} \end{equation} and $\mathcal{H}^{\mu}$ and $\mathcal{K}^{\mu\nu}$ are orthogonal to $u_{\mu}$ (spacelike in the comoving frame); i.e., $u_{\mu}\mathcal{H}^{\mu}=u_{\mu}\mathcal{K}^{\mu\nu}=u_{\nu}\mathcal{K}^{\mu\nu}=0$. In Eq.~\eqref{eq:energyMomentsLagrangian}, $\mathcal{J}$, $\mathcal{H}^{\mu}$, and $\mathcal{K}^{\mu\nu}$ are respectively the spectral energy density, momentum density, and stress measured by a Lagrangian observer. The four-velocity $u_{\mu}$ and the associated orthogonal projector $h_{\mu\nu}$ are used to extract components of the Lagrangian decomposition of $\mathcal{T}^{\mu\nu}$: \begin{equation} \mathcal{J} = u_{\mu}\,u_{\nu}\,\mathcal{T}^{\mu\nu}, \quad \mathcal{H}^{\mu} =-u_{\nu}\,h^{\mu}_{\hspace{6pt}\rho}\,\mathcal{T}^{\nu\rho}, \quad\text{and}\quad \mathcal{K}^{\mu\nu} =h^{\mu}_{\hspace{6pt}\rho}\,h^{\nu}_{\hspace{6pt}\sigma}\,\mathcal{T}^{\rho\sigma}. \label{eq:stressEnergyLagrangianExtractions} \end{equation} Note that the Lagrangian energy density and momentum density are related to the number density and flux by a factor $\varepsilon$; i.e., \begin{equation} \big\{\,\mathcal{J},\,\mathcal{H}^{\mu}\,\big\} = \varepsilon\,\big\{\,\mathcal{D},\,\mathcal{I}^{\mu}\,\big\}. 
\end{equation} Finally, a Lagrangian decomposition of the rank-three tensor in Eq.~\eqref{eq:heatFluxAngularMoments} gives \begin{align} \mathcal{Q}^{\mu\nu\rho} &= \varepsilon\,\big(\, \mathcal{J}\,u^{\mu}\,u^{\nu}\,u^{\rho} + \mathcal{H}^{\mu}\,u^{\nu}\,u^{\rho} + \mathcal{H}^{\nu}\,u^{\mu}\,u^{\rho} + \mathcal{H}^{\rho}\,u^{\mu}\,u^{\nu} \nonumber \\ &\hspace{32pt} + \mathcal{K}^{\mu\nu}\,u^{\rho} +\mathcal{K}^{\mu\rho}\,u^{\nu} +\mathcal{K}^{\nu\rho}\,u^{\mu} + \mathcal{L}^{\mu\nu\rho} \,\big), \label{eq:heatFluxLagrangianDecomposition} \end{align} where the spectral rank-three tensor measured by a Lagrangian observer, \begin{equation} \mathcal{L}^{\mu\nu\rho}(\varepsilon,x,t) = \f{\varepsilon}{4\pi}\int_{\mathbb{S}^{2}}f(\omega,\varepsilon,x,t)\,\ell^{\mu}\ell^{\nu}\ell^{\rho}\,d\omega, \label{eq:heatFluxMomentsLagrangian} \end{equation} is orthogonal to $u_{\mu}$---i.e., $u_{\mu}\mathcal{L}^{\mu\nu\rho}=u_{\nu}\mathcal{L}^{\mu\nu\rho}=u_{\rho}\mathcal{L}^{\mu\nu\rho}=0$---and is obtained from $\mathcal{Q}^{\mu\nu\rho}$ using the orthogonal projector: \begin{equation} \mathcal{L}^{\mu\nu\rho} = \f{1}{\varepsilon}\,h^{\mu}_{\hspace{4pt}\sigma}\,h^{\nu}_{\hspace{4pt}\kappa}\,h^{\rho}_{\hspace{4pt}\lambda}\,\mathcal{Q}^{\sigma\kappa\lambda}. \end{equation} \subsubsection{Eulerian decompositions} \label{sec:EulerianDecompositions} Eulerian projections of tensors are particularly useful when deriving evolution equations in the context of moment models for neutrino transport, as it is the Eulerian number density, energy density, and three-momentum density that are governed by conservation laws. In a manner similar to the Lagrangian decomposition in Eq.~\eqref{eq:numberCurrentLagrangianDecomposition}, the Eulerian decomposition of the spectral number current density is \begin{equation} \mathcal{N}^{\mu} = \mathcal{N}\,n^{\mu} + \mathcal{G}^{\mu}, \label{eq:numberCurrentEulerianDecomposition} \end{equation} where $n_{\mu}\,\mathcal{G}^{\mu}=0$. The four-velocity $n_{\mu}$ and the projector $\gamma_{\mu\nu}=g_{\mu\nu}+n_{\mu}\,n_{\nu}$ can be used to extract the Eulerian components \begin{equation} \mathcal{N} = -n_{\mu}\,\mathcal{N}^{\mu} \quad\text{and}\quad \mathcal{G}^{\mu} = \gamma^{\mu}_{\hspace{6pt}\nu}\,\mathcal{N}^{\nu}, \label{eq:numberCurrentEulerianExtractions} \end{equation} where $\mathcal{N}$ and $\mathcal{G}^{\mu}$ are the spectral number density and number flux density measured by an Eulerian observer, respectively. Note that $\mathcal{N}$ and $\mathcal{G}^{\mu}$ are still considered functions of $\varepsilon$, the neutrino energy measured by a Lagrangian observer. Thus, the definition in Eq.~\eqref{eq:numberCurrentEulerianDecomposition} should merely be viewed as a decomposition of $\mathcal{N}^{\mu}$ in a different basis than in Eq.~\eqref{eq:numberCurrentLagrangianDecomposition}, not as moments of the distribution with respect to Eulerian momentum coordinates. Inserting the Lagrangian decomposition in Eq.~\eqref{eq:numberCurrentLagrangianDecomposition} into the expressions in Eq.~\eqref{eq:numberCurrentEulerianExtractions}, the Eulerian number density and number flux density are expressed in terms of the Lagrangian number density and number flux density as \begin{align} \mathcal{N} &= W\,\mathcal{D} + v_{\mu}\,\mathcal{I}^{\mu}, \label{eq:eulerianNumberInTermsOfLagrangianMoments} \\ \mathcal{G}^{\mu} &=\big[\,\delta^{\mu}_{\hspace{6pt}\nu}-n^{\mu}v_{\nu}\,\big]\mathcal{I}^{\nu} + W\,\mathcal{D}\,v^{\mu}. 
\label{eq:eulerianNumberFluxInTermsOfLagrangianMoments} \end{align} Similarly, the Eulerian decomposition of the stress-energy tensor is \begin{equation} \mathcal{T}^{\mu\nu} = \mathcal{E}\,n^{\mu}\,n^{\nu} + \mathcal{F}^{\mu}\,n^{\nu} + n^{\mu}\,\mathcal{F}^{\nu} + \mathcal{S}^{\mu\nu}, \label{eq:stressEnergyEulerianDecomposition} \end{equation} where $\mathcal{E}$, $\mathcal{F}^{\mu}$, and $\mathcal{S}^{\mu\nu}$ are respectively the spectral energy density, momentum density, and stress measured by an Eulerian observer. The Eulerian momentum density and stress are spacelike (i.e., $n_{\mu}\mathcal{F}^{\mu}=n_{\mu}\mathcal{S}^{\mu\nu}=n_{\nu}\mathcal{S}^{\mu\nu}=0$), and the components of the Eulerian decomposition of $\mathcal{T}^{\mu\nu}$ are extracted using $n_{\mu}$ and the associated orthogonal projector $\gamma_{\mu\nu}$: \begin{equation} \mathcal{E} = n_{\mu}\,n_{\nu}\,\mathcal{T}^{\mu\nu}, \quad \mathcal{F}^{\mu} =-n_{\nu}\,\gamma^{\mu}_{\hspace{6pt}\rho}\,\mathcal{T}^{\nu\rho}, \quad \mathcal{S}^{\mu\nu} =\gamma^{\mu}_{\hspace{6pt}\rho}\,\gamma^{\nu}_{\hspace{6pt}\sigma}\,\mathcal{T}^{\rho\sigma}. \label{eq:stressEnergyEulerianExtractions} \end{equation} Inserting the Lagrangian decomposition in Eq.~\eqref{eq:stressEnergyLagrangianDecomposition} into the expressions in Eq.~\eqref{eq:stressEnergyEulerianExtractions}, the Eulerian energy density, momentum density, and stress are expressed in terms of the corresponding Lagrangian quantities as \citep[cf.\ Equations~(B8)--(B10) in][]{CaEnMe13a} \begin{align} \mathcal{E} &=W^{2}\mathcal{J} + 2\,W\,v_{\mu}\,\mathcal{H}^{\mu} + v_{\mu}\,v_{\nu}\,\mathcal{K}^{\mu\nu}, \label{eq:eulerianEnergyInTermsOfLagrangianMoments} \\ \mathcal{F}^{\mu} &=W\,v^{\mu}\,\big(\,W\mathcal{J} + v_{\nu}\,\mathcal{H}^{\nu}\,\big) + \big[\,\delta^{\mu}_{\hspace{6pt}\rho}-n^{\mu}\,v_{\rho}\,\big]\,\big(\,W\mathcal{H}^{\rho}+v_{\nu}\mathcal{K}^{\nu\rho}\,\big), \label{eq:eulerianMomentumInTermsOfLagrangianMoments} \\ \mathcal{S}^{\mu\nu} &=W^{2}\mathcal{J}v^{\mu}v^{\nu} + Wv^{\nu}\big[\,\delta^{\mu}_{\hspace{6pt}\rho}-n^{\mu}v_{\rho}\,\big]\,\mathcal{H}^{\rho} +Wv^{\mu}\big[\,\delta^{\nu}_{\hspace{6pt}\sigma}-n^{\nu}v_{\sigma}\,\big]\mathcal{H}^{\sigma} \nonumber \\ &\hspace{24pt} +\big[\,\delta^{\mu}_{\hspace{6pt}\rho}-n^{\mu}v_{\rho}\,\big]\big[\,\delta^{\nu}_{\hspace{6pt}\sigma}-n^{\nu}v_{\sigma}\,\big]\mathcal{K}^{\rho\sigma}. 
\label{eq:eulerianStressInTermsOfLagrangianMoments} \end{align} Note that for $v^{i}=0$ (i.e., $W=1$) these relations reduce to $\mathcal{E}=\mathcal{J}$, $\mathcal{F}^{\mu}=\mathcal{H}^{\mu}$, and $\mathcal{S}^{\mu\nu}=\mathcal{K}^{\mu\nu}$, as expected. Finally, and similar to Eqs.~\eqref{eq:numberCurrentEulerianDecomposition} and \eqref{eq:stressEnergyEulerianDecomposition}, the Eulerian decomposition of the rank-three tensor in Eq.~\eqref{eq:heatFluxAngularMoments} is given by \begin{align} \mathcal{Q}^{\mu\nu\rho} &=\varepsilon\,\big(\, \mathcal{X}\,n^{\mu}\,n^{\nu}\,n^{\rho} + \mathcal{Y}^{\mu}\,n^{\nu}\,n^{\rho} + \mathcal{Y}^{\nu}\,n^{\mu}\,n^{\rho} + \mathcal{Y}^{\rho}\,n^{\mu}\,n^{\nu} \nonumber \\ &\hspace{32pt} + \mathcal{Z}^{\mu\nu}\,n^{\rho} +\mathcal{Z}^{\mu\rho}\,n^{\nu} +\mathcal{Z}^{\nu\rho}\,n^{\mu} + \mathcal{W}^{\mu\nu\rho} \,\big), \label{eq:heatFluxEulerianDecomposition} \end{align} where the Eulerian components are obtained from \begin{align} \mathcal{X} &= - \f{1}{\varepsilon}\,n_{\mu}\,n_{\nu}\,n_{\rho}\,\mathcal{Q}^{\mu\nu\rho}, \\ \mathcal{Y}^{\mu} &=\f{1}{\varepsilon}\,\gamma^{\mu}_{\hspace{4pt}\sigma}\,n_{\nu}\,n_{\rho}\,\mathcal{Q}^{\sigma\nu\rho}, \\ \mathcal{Z}^{\mu\nu} &=-\f{1}{\varepsilon}\,\gamma^{\mu}_{\hspace{4pt}\sigma}\,\gamma^{\nu}_{\hspace{4pt}\kappa}\,n_{\rho}\,\mathcal{Q}^{\sigma\kappa\rho}, \\ \mathcal{W}^{\mu\nu\rho} &=\f{1}{\varepsilon}\,\gamma^{\mu}_{\hspace{4pt}\sigma}\,\gamma^{\nu}_{\hspace{4pt}\kappa}\,\gamma^{\rho}_{\hspace{4pt}\lambda}\,\mathcal{Q}^{\sigma\kappa\lambda}. \end{align} These components can be expressed in terms of the Lagrangian moments by inserting the Lagrangian decomposition in Eq.~\eqref{eq:heatFluxLagrangianDecomposition}. We will not repeat these tedious expressions here, but see Eqs.~(B15), (B16), (B17), and (B18) in \citet{CaEnMe13a} for expressions relating respectively $\mathcal{X}$, $\mathcal{Y}_{\mu}$, $\mathcal{Z}_{\mu\nu}$, and $\mathcal{W}_{\mu\nu\rho}$ to the Lagrangian moments $\mathcal{J}$, $\mathcal{H}^{\mu}$, $\mathcal{K}^{\mu\nu}$, and $\mathcal{L}^{\mu\nu\rho}$ (note the difference of a factor of $\varepsilon$ between our definition of $\mathcal{Q}^{\mu\nu\rho}$ and the corresponding variable in \citet{CaEnMe13a}). While components of Lagrangian decompositions are more closely related to the distribution function, Eulerian decompositions are more natural to use in the 3+1 approach and are powerful in simplifying terms appearing in the moment equations, especially the energy derivative terms in Eqs.~\eqref{eq:spectralNumberEquation} and \eqref{eq:spectralFourMomentumEquation}, which contain contractions with the covariant derivative of the fluid four-velocity. As elaborated on in \citet{CaEnMe13a}, Eulerian decompositions of $\mathcal{T}^{\mu\nu}$ and $\mathcal{Q}^{\mu\nu\rho}$, in combination with the Eulerian decomposition of $u^{\mu}$ in Eq.~\eqref{eq:fluidFourVelocityEulerian}, result in surprisingly simple expressions, without explicit reference to connection coefficients (cf.\ Eq.~\eqref{eq:ConnectionComoving}). Moreover, as emphasized by \citet{CaEnMe13a}, consistent use of Eulerian decompositions in the spacetime and momentum-space divergences in the moment equations simplifies elucidating the relationship between the equations for four-momentum and number conservation in the 3+1 case. \subsubsection{Two-moment kinetics} \label{sec:TwoMoment} In this section we review two-moment models in the 3+1 formulation of general relativity, which can serve as a basis for the development of numerical methods and their implementation in codes to model neutrino transport in core-collapse supernovae. 
We present three versions, all based on Eq.~\eqref{eq:spectralFourMomentumEquation}, but using different projections. The projection of Eq.~\eqref{eq:spectralFourMomentumEquation} orthogonal and tangential to the spacelike slice of the Eulerian observer (using $n_{\mu}$ and $\gamma_{\mu\nu}$) gives rise to the \emph{Eulerian} two-moment model, while the projection of Eq.~\eqref{eq:spectralFourMomentumEquation} orthogonal and tangential to the spacelike slice of the Lagrangian observer (using $u_{\mu}$ and $h_{\mu\nu}$) gives rise to the \emph{Lagrangian} two-moment model. We also present a \emph{number conservative} two-moment model, which is closely related to the Lagrangian two-moment model, but uses projections based on $u_{\mu}/\varepsilon$ and $h_{\mu\nu}/\varepsilon$. As a result, one of the evolution equations is Eq.~\eqref{eq:spectralNumberEquation}, which is conservative with respect to neutrino number. Analytically, all these formulations are equivalent, but they could have different numerical properties. \paragraph{Eulerian two-moment model} The Eulerian two-moment model evolves the spectral energy density and momentum density measured by an Eulerian observer ($\mathcal{E}$ and $\mathcal{F}_{j}$, respectively). The energy equation is obtained as the projection of Eq.~\eqref{eq:spectralFourMomentumEquation} onto the four-velocity of the Eulerian observer (i.e., contracting $-n_{\mu}$ with Eq.~\eqref{eq:spectralFourMomentumEquation}). The result is: \begin{align} &\f{1}{\alpha\sqrt{\gamma}} \big[\,\pd{}{t}\big(\,\sqrt{\gamma}\,\mathcal{E}\,\big)+\pd{}{i}\big(\,\sqrt{\gamma}\,\big[\,\alpha\,\mathcal{F}^{i}-\beta^{i}\,\mathcal{E}\,\big]\,\big)\,\big] -\f{1}{\varepsilon^{2}}\pderiv{}{\varepsilon}\big(\,\varepsilon^{2}\,(-n_{\mu})\,\mathcal{Q}^{\mu\nu\rho}\,\nabla_{\nu}u_{\rho}\,\big) \nonumber \\ &\hspace{0pt} =\f{1}{\alpha}\,\big[\,\alpha\,\mathcal{S}^{ij}\,\mathsf{K}_{ij}-\mathcal{F}^{i}\,\pd{\alpha}{i}\,\big] +\f{W}{4\pi}\int_{\mathbb{S}^{2}}\mathcal{C}(f)\,d\omega+\f{v^{j}}{4\pi}\int_{\mathbb{S}^{2}}\mathcal{C}(f)\,\ell_{j}\,d\omega, \label{eq:spectralEulerianEnergyEquation_3p1} \end{align} where the sources on the right-hand side are due to spacetime geometry and energy exchange between neutrinos and the fluid. The left-hand side is in divergence form, where the divergence operates on the spacetime-plus-energy phase space. 
In expressing the terms inside the energy derivative (last term on the left-hand side), we make use of the Eulerian decomposition in Eq.~\eqref{eq:heatFluxEulerianDecomposition} to write \begin{align} &-\f{n_{\mu}}{\varepsilon}\,\mathcal{Q}^{\mu\nu\rho}\,\nabla_{\nu}u_{\rho} \nonumber \\ &= \big(\, \mathcal{X}\,n^{\nu}\,n^{\rho} + \mathcal{Y}^{\nu}\,n^{\rho} + n^{\nu}\,\mathcal{Y}^{\rho} + \mathcal{Z}^{\nu\rho} \,\big)\,\nabla_{\nu}u_{\rho} \nonumber \\ &= \f{W}{\alpha}\, \Big\{\, \big(\,\mathcal{Y}^{i} - \mathcal{X}\,v^{i}\,\big)\,\pd{\alpha}{i} +\mathcal{Y}_{k}\,v^{i}\,\pd{\beta^{k}}{i} +\alpha\,\mathcal{Z}^{ki}\,\big(\,\f{1}{2}\,v^{m}\,\pd{\gamma_{ki}}{m} - \mathsf{K}_{ki}\,\big) \,\Big\} \nonumber \\ &\hspace{12pt} +\f{1}{\alpha}\, \Big\{\, \mathcal{Y}_{k}\,\pd{}{t}\big(Wv^{k}\big) - \mathcal{X}\,\pd{W}{t} - \big(\,\alpha\,\mathcal{Y}^{i} - \mathcal{X}\,\beta^{i}\,\big)\,\pd{W}{i} \nonumber \\ &\hspace{64pt} + \big(\,\alpha\,\mathcal{Z}_{k}^{\hspace{4pt}i} - \mathcal{Y}_{k}\,\beta^{i}\,\big)\,\pd{}{i}\big(Wv^{k}\big) \,\Big\}, \label{eq:observerCorrectionsEulerianEnergyEquation_3p1} \end{align} which account for changes in the spectral energy density due to gravitational energy shifts and the fact that adjacent comoving observer velocities in spacetime are generally different. The momentum equation is obtained as the projection of Eq.~\eqref{eq:spectralFourMomentumEquation} into the slice with normal given by $n^{\mu}$ (i.e., contracting $\gamma_{j\mu}$ with Eq.~\eqref{eq:spectralFourMomentumEquation}), which results in \begin{align} &\f{1}{\alpha\sqrt{\gamma}} \big[\pd{}{t}\big(\,\sqrt{\gamma}\mathcal{F}_{j}\,\big)+\pd{}{i}\big(\,\sqrt{\gamma}\big[\,\alpha\mathcal{S}^{i}_{\hspace{4pt}j}-\beta^{i}\mathcal{F}_{j}\,\big]\,\big)\big] -\f{1}{\varepsilon^{2}}\pderiv{}{\varepsilon}\big(\,\varepsilon^{2}\,\gamma_{j\mu}\,\mathcal{Q}^{\mu\nu\rho}\,\nabla_{\nu}u_{\rho}\,\big) \label{eq:spectralEulerianMomentumEquation_3p1} \\ &\hspace{0pt} =\f{1}{\alpha}\,\big[\,\mathcal{F}_{i}\,\pd{\beta^{i}}{j}+\f{1}{2}\,\alpha\,\mathcal{S}^{ik}\,\pd{\gamma_{ik}}{j}-\mathcal{E}\,\pd{\alpha}{j}\,\big] +\f{1}{4\pi}\int_{\mathbb{S}^{2}}\mathcal{C}(f)\,\ell_{j}\,d\omega+\f{Wv_{j}}{4\pi}\int_{\mathbb{S}^{2}}\mathcal{C}(f)\,d\omega, \nonumber \end{align} where the right-hand side gives rise to changes in the spectral momentum density due to spacetime geometry and neutrino--matter interactions. 
Again, using the Eulerian decomposition in Eq.~\eqref{eq:heatFluxEulerianDecomposition}, the terms inside the energy derivative can be written as \begin{align} &\f{\gamma_{j\mu}}{\varepsilon}\,\mathcal{Q}^{\mu\nu\rho}\,\nabla_{\nu}u_{\rho} \nonumber \\ &= \big(\, \mathcal{Y}_{j}\,n^{\nu}\,n^{\rho} + \mathcal{Z}_{j}^{\hspace{4pt}\nu}\,n^{\rho} + \mathcal{Z}_{j}^{\hspace{4pt}\rho}\,n^{\nu} + \mathcal{W}_{j}^{\hspace{4pt}\nu\rho} \,\big)\,\nabla_{\nu}u_{\rho} \nonumber \\ &= \f{W}{\alpha}\, \Big\{\, \big(\,\mathcal{Z}_{j}^{\hspace{2pt}i} - \mathcal{Y}_{j}\,v^{i}\,\big)\,\pd{\alpha}{i} +\mathcal{Z}_{jk}\,v^{i}\,\pd{\beta^{k}}{i} +\alpha\,\mathcal{W}_{j}^{\hspace{2pt}ki}\,\big(\,\f{1}{2}\,v^{m}\,\pd{\gamma_{ki}}{m} - \mathsf{K}_{ki}\,\big) \,\Big\} \nonumber \\ &\hspace{12pt} +\f{1}{\alpha}\, \Big\{\, \mathcal{Z}_{jk}\,\pd{}{t}\big(Wv^{k}\big) - \mathcal{Y}_{j}\,\pd{W}{t} - \big(\,\alpha\,\mathcal{Z}_{j}^{\hspace{2pt}i} - \mathcal{Y}_{j}\,\beta^{i}\,\big)\,\pd{W}{i} \nonumber \\ &\hspace{64pt} + \big(\,\alpha\,\mathcal{W}_{jk}^{\hspace{4pt}i} - \mathcal{Z}_{jk}\,\beta^{i}\,\big)\,\pd{}{i}\big(Wv^{k}\big) \,\Big\}, \label{eq:observerCorrectionsEulerianMomentumEquation_3p1} \end{align} which account for changes in the spectral momentum density due to gravitational effects and to the change of comoving observers from point to point in spacetime. An obvious advantage of the Eulerian two-moment model given by Eqs.~\eqref{eq:spectralEulerianEnergyEquation_3p1} and \eqref{eq:spectralEulerianMomentumEquation_3p1} is its conservative form. Integrating these equations over energy space (using $dV_{\varepsilon}=4\pi\varepsilon^{2}d\varepsilon$) results in the radiation energy equation \begin{align} &\f{1}{\alpha\sqrt{\gamma}} \big[\,\pd{}{t}\big(\,\sqrt{\gamma}\,E\,\big)+\pd{}{i}\big(\,\sqrt{\gamma}\,\big[\,\alpha\,F^{i}-\beta^{i}\,E\,\big]\,\big)\,\big] \label{eq:EulerianEnergyEquation_3p1} \\ &=\f{1}{\alpha}\,\big[\,\alpha\,S^{ij}\,\mathsf{K}_{ij}-F^{i}\,\pd{\alpha}{i}\,\big] +W\int_{V_{p}}\mathcal{C}[f]\,\varepsilon\,\pi_{m} +v^{j}\int_{V_{p}}\mathcal{C}[f]\,\varepsilon\,\ell_{j}\,\pi_{m} \nonumber \end{align} and radiation momentum equation \begin{align} &\f{1}{\alpha\sqrt{\gamma}} \big[\,\pd{}{t}\big(\,\sqrt{\gamma}F_{j}\,\big)+\pd{}{i}\big(\,\sqrt{\gamma}\big[\,\alpha S^{i}_{\hspace{4pt}j}-\beta^{i}F_{j}\,\big]\,\big)\,\big] \label{eq:EulerianMomentumEquation_3p1} \\ &=\f{1}{\alpha}\,\big[\,F_{i}\,\pd{\beta^{i}}{j}+\f{1}{2}\,\alpha\,S^{ik}\,\pd{\gamma_{ik}}{j}-E\,\pd{\alpha}{j}\,\big] +\int_{V_{p}}\mathcal{C}[f]\,\varepsilon\,\ell_{j}\,\pi_{m} +Wv_{j}\int_{V_{p}}\mathcal{C}[f]\,\varepsilon\,\pi_{m}, \nonumber \end{align} where the energy-integrated Eulerian moments are given by \begin{equation} \big\{\,E,\,F^{\mu},\,S^{\mu\nu}\,\big\} = \int_{0}^{\infty}\big\{\,\mathcal{E},\,\mathcal{F}^{\mu},\,\mathcal{S}^{\mu\nu}\,\big\}\,dV_{\varepsilon}. \end{equation} Eqs.~\eqref{eq:EulerianEnergyEquation_3p1} and \eqref{eq:EulerianMomentumEquation_3p1} are conservation laws for radiation energy and momentum in the sense that, in the case of Cartesian coordinates in flat spacetime, with no neutrino--matter interactions, the right-hand sides vanish and the equations express exact conservation of radiation energy and momentum. 
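As a sanity check (a worked limit), in flat spacetime with Cartesian coordinates ($\alpha=1$, $\beta^{i}=0$, $\gamma_{ij}=\delta_{ij}$, $\mathsf{K}_{ij}=0$) the geometry sources vanish and Eqs.~\eqref{eq:EulerianEnergyEquation_3p1} and \eqref{eq:EulerianMomentumEquation_3p1} reduce to
\begin{align*}
\pd{E}{t}+\pd{F^{i}}{i} &= W\int_{V_{p}}\mathcal{C}[f]\,\varepsilon\,\pi_{m}+v^{j}\int_{V_{p}}\mathcal{C}[f]\,\varepsilon\,\ell_{j}\,\pi_{m}, \\
\pd{F_{j}}{t}+\pd{S^{i}_{\hspace{4pt}j}}{i} &= \int_{V_{p}}\mathcal{C}[f]\,\varepsilon\,\ell_{j}\,\pi_{m}+W\,v_{j}\int_{V_{p}}\mathcal{C}[f]\,\varepsilon\,\pi_{m},
\end{align*}
i.e., special relativistic conservation laws for radiation energy and momentum with neutrino--matter collision sources.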
The Eulerian two-moment model presented here is the basis for several codes developed to model neutrino transport in core-collapse supernovae \citep{OCon15,KuTaKo16,RoOtHa16}: the \textsc{GR1D} code \citep{OCon15} solves the equations in spherical symmetry; the \textsc{Zelmani} code \citep{RoOtHa16} solves the equations in three spatial dimensions, but does not include velocity-dependent terms (i.e., $v^{i}=0$ in the transport equations); and \citet{KuTaKo16} solve the full system in three spatial dimensions. \paragraph{Lagrangian two-moment model} The Lagrangian two-moment model is an alternative to the Eulerian two-moment model discussed above, in which the spectral energy density and momentum density measured by the Lagrangian observer with four-velocity $u_{\mu}$ are evolved ($\mathcal{J}$ and $\mathcal{H}_{j}$, respectively). The energy equation is obtained as the projection of Eq.~\eqref{eq:spectralFourMomentumEquation} along the four-velocity of the Lagrangian observer (i.e., contracting $-u_{\mu}$ with Eq.~\eqref{eq:spectralFourMomentumEquation}), which results in \begin{align} &\f{1}{\alpha\sqrt{\gamma}} \big[\,\pd{}{t}\big(\,\sqrt{\gamma}\,\big[\,W\mathcal{J}+v^{i}\mathcal{H}_{i}\,\big]\,\big) +\pd{}{i}\big(\,\sqrt{\gamma}\,\big[\,\alpha\,\mathcal{H}^{i}+\big(\,\alpha\,v^{i}-\beta^{i}\,\big)\,W\mathcal{J}\,\big]\,\big)\,\big] \nonumber \\ &\hspace{6pt} -\f{1}{\varepsilon^{2}}\pderiv{}{\varepsilon}\big(\,\varepsilon^{3}\,\mathcal{T}^{\mu\nu}\,\nabla_{\mu}u_{\nu}\,\big) =-\mathcal{T}^{\mu\nu}\nabla_{\mu}u_{\nu} + \f{1}{4\pi}\int_{\mathbb{S}^{2}}\mathcal{C}(f)\,d\omega, \label{eq:spectralLagrangianEnergyEquation_3p1} \end{align} where the contraction of the stress-energy tensor with the covariant derivative of the Lagrangian observer's four-velocity is given in $3+1$ form as \begin{align} &\mathcal{T}^{\mu\nu}\,\nabla_{\mu}u_{\nu} \nonumber \\ &= \big(\, \mathcal{E}\,n^{\mu}\,n^{\nu} + \mathcal{F}^{\mu}\,n^{\nu} + n^{\mu}\,\mathcal{F}^{\nu} + \mathcal{S}^{\mu\nu} \,\big)\,\nabla_{\mu}u_{\nu} \nonumber \\ &= \f{W}{\alpha}\, \Big\{\, \big(\,\mathcal{F}^{i} - \mathcal{E}\,v^{i}\,\big)\,\pd{\alpha}{i} +\mathcal{F}_{k}\,v^{i}\,\pd{\beta^{k}}{i} +\alpha\,\mathcal{S}^{ki}\,\big(\,\f{1}{2}\,v^{m}\,\pd{\gamma_{ki}}{m} - \mathsf{K}_{ki}\,\big) \,\Big\} \nonumber \\ &\hspace{12pt} +\f{1}{\alpha}\, \Big\{\, \mathcal{F}_{k}\,\pd{}{t}\big(Wv^{k}\big) - \mathcal{E}\,\pd{W}{t} - \big(\,\alpha\,\mathcal{F}^{i} - \mathcal{E}\,\beta^{i}\,\big)\,\pd{W}{i} \nonumber \\ &\hspace{64pt} + \big(\,\alpha\,\mathcal{S}_{k}^{\hspace{4pt}i} - \mathcal{F}_{k}\,\beta^{i}\,\big)\,\pd{}{i}\big(Wv^{k}\big) \,\Big\}, \label{eq:observerCorrectionsLagrangianEnergyEquation_3p1} \end{align} which accounts for changes to the spectral energy density from gravitational effects and from the fact that adjacent comoving observers in spacetime have different velocities. In Eq.~\eqref{eq:observerCorrectionsLagrangianEnergyEquation_3p1}, we made use of the Eulerian decomposition of the stress-energy tensor, which, as discussed at the end of Sect.~\ref{sec:EulerianDecompositions}, is more convenient than using the Lagrangian decomposition, since it keeps the number of terms in the expression to a minimum and simplifies book-keeping. The components of the Eulerian decomposition are related to the Lagrangian components by Eqs.~\eqref{eq:eulerianEnergyInTermsOfLagrangianMoments}-\eqref{eq:eulerianStressInTermsOfLagrangianMoments}. 
The Lagrangian momentum equation is obtained by projecting Eq.~\eqref{eq:spectralFourMomentumEquation} tangential to the slice with $u^{\mu}$ as the normal (i.e., contracting $h_{j\mu}$ with Eq.~\eqref{eq:spectralFourMomentumEquation}), which gives \begin{align} &\f{1}{\alpha\sqrt{\gamma}} \big[\,\pd{}{t}\big(\,\sqrt{\gamma}\,\big[\,W\mathcal{H}_{j}+v^{i}\mathcal{K}_{ij}\,\big]\,\big) +\pd{}{i}\big(\,\sqrt{\gamma}\,\big[\,\alpha\,\mathcal{K}^{i}_{\hspace{4pt}j}+\big(\,\alpha\,v^{i}-\beta^{i}\,\big)\,W\mathcal{H}_{j}\,\big]\,\big)\,\big] \label{eq:spectralLagrangianMomentumEquation_3p1} \\ &\hspace{6pt} -\f{1}{\varepsilon^{2}}\pderiv{}{\varepsilon}\big(\,\varepsilon^{2}\,h_{j\mu}\,\mathcal{Q}^{\mu\nu\rho}\,\nabla_{\nu}u_{\rho}\,\big) =\mathcal{T}^{\mu\nu}\,\big(\,\nabla_{\nu}h_{j\mu} + \Gamma^{\rho}_{\hspace{4pt}j\nu}h_{\rho\mu}\,\big) + \f{1}{4\pi}\int_{\mathbb{S}^{2}}\mathcal{C}(f)\,\ell_{j}\,d\omega, \nonumber \end{align} where the ``geometry'' source on the right-hand side can be written as \begin{align} &\mathcal{T}^{\mu\nu}\,\big(\,\nabla_{\nu}h_{j\mu} + \Gamma^{\rho}_{\hspace{4pt}j\nu}h_{\rho\mu}\,\big) \nonumber \\ &\hspace{12pt} =\f{1}{2}\,\mathcal{T}^{\mu\nu}\,\pd{g_{\mu\nu}}{j} +Wv_{j}\,\mathcal{T}^{\mu\nu}\nabla_{\mu}u_{\nu} +u_{\mu}\mathcal{T}^{\mu\nu}\,\pd{}{\nu}\big(\,Wv_{j}\,\big). \label{eq:spectralLagrangianMomentumEquationGeometrySource_3p1} \end{align} Again, using the Eulerian decomposition of $\mathcal{T}^{\mu\nu}$, the first term on the right-hand side of Eq.~\eqref{eq:spectralLagrangianMomentumEquationGeometrySource_3p1} can be written as \begin{equation} \f{1}{2}\,\mathcal{T}^{\mu\nu}\,\pd{g_{\mu\nu}}{j} =\f{1}{\alpha}\,\big[\,\mathcal{F}_{i}\,\pd{\beta^{i}}{j}+\f{1}{2}\,\alpha\,\mathcal{S}^{ik}\,\pd{\gamma_{ik}}{j}-\mathcal{E}\,\pd{\alpha}{j}\,\big], \end{equation} which also appears on the right-hand side of Eq.~\eqref{eq:spectralEulerianMomentumEquation_3p1}. Similarly, the third term on the right-hand side of Eq.~\eqref{eq:spectralLagrangianMomentumEquationGeometrySource_3p1} can be written as \begin{align} u_{\mu}\mathcal{T}^{\mu\nu}\,\pd{}{\nu}\big(\,Wv_{j}\,\big) &=-\f{W}{\alpha}\,\Big\{\,\mathcal{E}-v^{k}\,\mathcal{F}_{k}\,\Big\}\,\pd{}{t}\big(Wv_{j}\big) \\ &\hspace{12pt} -\f{W}{\alpha}\, \Big\{\, \big(\,\alpha\,\mathcal{F}^{i}-\beta^{i}\,\mathcal{E}\,\big) -v^{k}\,\big(\,\alpha\mathcal{S}^{i}_{\hspace{4pt}k}-\beta^{i}\,\mathcal{F}_{k}\,\big) \,\Big\}\,\pd{}{i}\big(Wv_{j}\big), \nonumber \end{align} while the second term on the right-hand side of Eq.~\eqref{eq:spectralLagrangianMomentumEquationGeometrySource_3p1} contains the expression in Eq.~\eqref{eq:observerCorrectionsLagrangianEnergyEquation_3p1}. 
Finally, the expression inside the energy derivative term on the left-hand side of Eq.~\eqref{eq:spectralLagrangianMomentumEquation_3p1} can be written as \begin{align} &\f{h_{j\mu}}{\varepsilon}\,\mathcal{Q}^{\mu\nu\rho}\,\nabla_{\nu}u_{\rho} \nonumber \\ &=\big(\,\f{1}{\varepsilon}\,\mathcal{Q}_{j}^{\hspace{4pt}\nu\rho}-Wv_{j}\,\mathcal{T}^{\nu\rho}\,\big)\,\nabla_{\nu}u_{\rho} \nonumber \\ &= \Big\{\, \big(\mathcal{Y}_{j}-Wv_{j}\,\mathcal{E}\big)\,n^{\nu}\,n^{\rho} +\big(\mathcal{Z}_{j}^{\hspace{4pt}\nu}-Wv_{j}\,\mathcal{F}^{\nu}\big)\,n^{\rho} \nonumber \\ &\hspace{24pt} +\big(\mathcal{Z}_{j}^{\hspace{4pt}\rho}-Wv_{j}\mathcal{F}^{\rho}\big)\,n^{\nu} +\big(\mathcal{W}_{j}^{\hspace{4pt}\nu\rho}-Wv_{j}\mathcal{S}^{\nu\rho}\big)\,\Big\}\,\nabla_{\nu}u_{\rho}, \label{eq:observerCorrectionsLagrangianMomentumEquation_3p1} \end{align} which is a contraction of Eulerian decompositions of rank-two tensors, with components $\big(\mathcal{Y}_{j}-Wv_{j}\,\mathcal{E}\big)$, $\big(\mathcal{Z}_{j}^{\hspace{4pt}\nu}-Wv_{j}\,\mathcal{F}^{\nu}\big)$, and $\big(\mathcal{W}_{j}^{\hspace{4pt}\nu\rho}-Wv_{j}\mathcal{S}^{\nu\rho}\big)$, contracted with the covariant derivative of the fluid four-velocity, and can be written in a form similar to Eq.~\eqref{eq:observerCorrectionsEulerianMomentumEquation_3p1}. The Lagrangian two-moment model presented here (Eqs.~\eqref{eq:spectralLagrangianEnergyEquation_3p1} and \eqref{eq:spectralLagrangianMomentumEquation_3p1}) is the basis for several codes used to model neutrino transport in core-collapse supernovae: \citet{MuJaDi10} used it in conjunction with the conformal flatness approximation to GR (CFA) and ray-by-ray neutrino transport; and \citet{JuObJa15} and \citet{SkDoBu19} used this model in its $\mathcal{O}(v/c)$ limit to develop multi-dimensional neutrino transport codes. \paragraph{Number conservative two-moment model} The number conservative model is yet another formulation of two-moment transport, which evolves the spectral number density as measured by the Eulerian observer (with four-velocity $n_{\mu}$) and the spectral number flux. The equation for the number density is obtained (1) directly from Eq.~\eqref{eq:spectralNumberEquation}, (2) by contraction of Eq.~\eqref{eq:spectralFourMomentumEquation} with $-u_{\mu}/\varepsilon$, or (3) by dividing Eq.~\eqref{eq:spectralLagrangianEnergyEquation_3p1} by $\varepsilon$. In $3+1$ form it is given by \begin{align} &\f{1}{\alpha\sqrt{\gamma}} \big[\,\pd{}{t}\big(\,\sqrt{\gamma}\,\big[\,W\mathcal{D}+v^{i}\mathcal{I}_{i}\,\big]\,\big) +\pd{}{i}\big(\,\sqrt{\gamma}\,\big[\,\alpha\,\mathcal{I}^{i}+\big(\,\alpha\,v^{i}-\beta^{i}\,\big)\,W\mathcal{D}\,\big]\,\big)\,\big] \nonumber \\ &\hspace{12pt} -\f{1}{\varepsilon^{2}}\pderiv{}{\varepsilon}\big(\,\varepsilon^{2}\,\mathcal{T}^{\mu\nu}\,\nabla_{\mu}u_{\nu}\,\big) =\f{1}{4\pi}\int_{\mathbb{S}^{2}}\mathcal{C}(f)\,\f{d\omega}{\varepsilon}, \label{eq:spectralNumberEquation_3p1} \end{align} where the expression inside the energy derivative (last term on the left-hand side) is given by Eq.~\eqref{eq:observerCorrectionsLagrangianEnergyEquation_3p1}. 
Eq.~\eqref{eq:spectralNumberEquation_3p1} is conservative in the sense that an integration over energy space gives the balance equation Eq.~\eqref{eq:numberEquation}, which in $3+1$ form is given by \begin{equation} \f{1}{\alpha\sqrt{\gamma}} \big[\,\pd{}{t}\big(\,\sqrt{\gamma}\,N\,\big) +\pd{}{i}\big(\,\sqrt{\gamma}\,\big[\,\alpha\,G^{i}-\beta^{i}\,N\,\big]\,\big)\,\big] =\int_{V_{p}}\mathcal{C}[f]\,\pi_{m}, \label{eq:numberEquation_3p1} \end{equation} expressing exact particle conservation in the absence of particle-converting neutrino--matter interactions (e.g., emission and absorption). The equation for the number flux density is obtained by contraction of $h_{j\mu}/\varepsilon$ with Eq.~\eqref{eq:spectralFourMomentumEquation} (or by dividing Eq.~\eqref{eq:spectralLagrangianMomentumEquation_3p1} by $\varepsilon$): \begin{align} &\f{1}{\alpha\sqrt{\gamma}} \big[\,\pd{}{t}\big(\,\sqrt{\gamma}\,\big[\,W\mathcal{I}_{j}+v^{i}\widehat{\mathcal{K}}_{ij}\,\big]\,\big) +\pd{}{i}\big(\,\sqrt{\gamma}\,\big[\,\alpha\,\widehat{\mathcal{K}}^{i}_{\hspace{4pt}j}+\big(\,\alpha\,v^{i}-\beta^{i}\,\big)\,W\mathcal{I}_{j}\,\big]\,\big)\,\big] \nonumber \\ &\hspace{6pt} -\f{1}{\varepsilon^{2}}\pderiv{}{\varepsilon}\Big(\,\varepsilon^{2}\,h_{j\mu}\,\widehat{\mathcal{Q}}^{\mu\nu\rho}\,\nabla_{\nu}u_{\rho}\,\Big) =\f{1}{2}\,\widehat{\mathcal{T}}^{\mu\nu}\,\pd{g_{\mu\nu}}{j} +\f{1}{\varepsilon}\,\widehat{\mathcal{Q}}_{j}^{\hspace{4pt}\mu\nu}\,\nabla_{\nu}u_{\mu} -\mathcal{N}^{\nu}\,\pd{}{\nu}u_{j} \nonumber \\ &\hspace{150pt} +\f{1}{4\pi}\int_{\mathbb{S}^{2}}\mathcal{C}(f)\,\ell_{j}\,\f{d\omega}{\varepsilon}. \label{eq:spectralNumberFluxEquation_3p1} \end{align} Here, we use the ``hat'' to denote previously defined moments divided by $\varepsilon$; e.g., \begin{equation} \big\{\,\widehat{\mathcal{T}}^{\mu\nu},\,\widehat{\mathcal{Q}}^{\mu\nu\rho}\,\big\} = \f{1}{\varepsilon}\big\{\,\mathcal{T}^{\mu\nu},\,\mathcal{Q}^{\mu\nu\rho}\,\big\}. \end{equation} The expression in the energy derivative in Eq.~\eqref{eq:spectralNumberFluxEquation_3p1} is given by Eq.~\eqref{eq:observerCorrectionsLagrangianMomentumEquation_3p1}. The first term on the right-hand side of Eq.~\eqref{eq:spectralNumberFluxEquation_3p1} can be written as (cf.\ Eq.~\eqref{eq:spectralLagrangianMomentumEquation_3p1}) \begin{equation} \f{1}{2}\,\widehat{\mathcal{T}}^{\mu\nu}\,\pd{g_{\mu\nu}}{j} =\f{1}{\alpha}\,\Big\{\,\widehat{\mathcal{F}}_{i}\,\pd{\beta^{i}}{j}+\f{1}{2}\,\alpha\,\widehat{\mathcal{S}}^{ik}\,\pd{\gamma_{ik}}{j}-\widehat{\mathcal{E}}\,\pd{\alpha}{j}\,\Big\}, \end{equation} while the third term on the right-hand side of Eq.~\eqref{eq:spectralNumberFluxEquation_3p1} can be written as \begin{equation} \mathcal{N}^{\nu}\pd{}{\nu}u_{j} =\f{1}{\alpha}\,\Big\{\,\mathcal{N}\,\pd{}{t}\,\big(Wv_{j}\big)+\big(\alpha\,\mathcal{G}^{i}-\beta^{i}\mathcal{N}\big)\,\pd{}{i}\big(Wv_{j}\big)\,\Big\}, \end{equation} where $\mathcal{N}$ and $\mathcal{G}^{i}$ are written in terms of Lagrangian moments in Eqs.~\eqref{eq:eulerianNumberInTermsOfLagrangianMoments} and \eqref{eq:eulerianNumberFluxInTermsOfLagrangianMoments}. 
The second term on the right-hand side of Eq.~\eqref{eq:spectralNumberFluxEquation_3p1} can be written as \begin{align} \f{1}{\varepsilon}\,\widehat{\mathcal{Q}}_{j}^{\hspace{4pt}\mu\nu}\,\nabla_{\nu}u_{\mu} &= \Big\{\, \widehat{\mathcal{Y}}_{j}\,n^{\mu}\,n^{\nu} +\widehat{\mathcal{Z}}_{j}^{\hspace{4pt}\mu}\,n^{\nu} +n^{\mu}\,\widehat{\mathcal{Z}}_{j}^{\hspace{4pt}\nu} +\widehat{\mathcal{W}}_{j}^{\hspace{4pt}\mu\nu} \,\Big\}\,\nabla_{\nu}u_{\mu}, \end{align} which is in the same form as Eq.~\eqref{eq:observerCorrectionsLagrangianEnergyEquation_3p1}, but where $\widehat{\mathcal{Y}}_{j}$, $\widehat{\mathcal{Z}}_{j}^{\hspace{4pt}\mu}$, and $\widehat{\mathcal{W}}_{j}^{\hspace{4pt}\mu\nu}$ replace $\mathcal{E}$, $\mathcal{F}^{\mu}$, and $\mathcal{S}^{\mu\nu}$, respectively. This number conservative two-moment model was presented in spherical symmetry, assuming the CFA, by \citet{MuJaDi10}, and was also presented in the $\mathcal{O}(v/c)$ limit by \citet{JuObJa15}, but it was not explicitly used in the numerical techniques developed by either group of authors. The model presented here is the 3+1 general relativistic version of that model, without approximation. It should also be mentioned that \citet{RaJa02} developed a two-moment, variable Eddington factor method based on solving both the Lagrangian two-moment model and the number conservative two-moment model simultaneously, in spherical symmetry and in the $\mathcal{O}(v/c)$ limit, treating the radiation energy density, momentum density, number density, and number flux density as independent variables. However, owing to an inconsistency between the energy and number equations in this approach, the mean energy in an energy group, $\mathcal{J}/\mathcal{D}$, is not constrained to lie within the group boundaries and can even move outside the group \citep{MuJaDi10}. \subsubsection{The closure problem} \label{sec:closure} The two-moment models discussed above are not closed. The rank-two tensor $\mathcal{K}^{\mu\nu}$ defined in Eq.~\eqref{eq:energyMomentsLagrangian} and the rank-three tensor $\mathcal{L}^{\mu\nu\rho}$ defined in Eq.~\eqref{eq:heatFluxMomentsLagrangian} appear in various terms in the two-moment model: components of $\mathcal{K}^{\mu\nu}$ appear in the spacetime derivative terms, while components of both $\mathcal{K}^{\mu\nu}$ and $\mathcal{L}^{\mu\nu\rho}$ appear in the energy derivative terms and source terms. These tensor components must be expressed in terms of the evolved moments to close the system of equations. For the Eulerian and the Lagrangian two-moment models, the evolved quantities are ultimately the energy density and momentum density measured by a comoving observer, $\mathcal{J}$ and $\mathcal{H}_{j}$, respectively. (For the number conservative two-moment model, the evolved quantities are the number density and number flux density measured by a comoving observer, $\mathcal{D}$ and $\mathcal{I}_{j}$, respectively.) 
Following \citet{Le84,AnPeSa92}, the general symmetric, rank-two tensor $\mathcal{K}^{\mu\nu}$, depending on $\mathcal{J}$ and $\mathcal{H}^{\mu}$, that is orthogonal to the fluid four-velocity $u_{\mu}$ and that satisfies the trace condition $\mathcal{K}^{\mu}_{\hspace{6pt}\mu}=\mathcal{J}$ takes the form
\begin{equation}
\mathcal{K}^{\mu\nu}
=\f{1}{2}\,\Big[\,\big(\,1-\mathfrak{k}\,\big)\,h^{\mu\nu}+\big(\,3\,\mathfrak{k}-1\,\big)\,\widehat{\mathsf{h}}^{\mu}\,\widehat{\mathsf{h}}^{\nu}\,\Big]\,\mathcal{J},
\label{eq:radiationStressTensor}
\end{equation}
where $\mathfrak{k}(\mathcal{J},\mathfrak{h})$ is the Eddington factor, $\mathfrak{h}=\mathcal{H}/\mathcal{J}$ is the flux factor, $\mathcal{H}=\sqrt{\mathcal{H}_{\mu}\mathcal{H}^{\mu}}$, and $\widehat{\mathsf{h}}^{\mu}=\mathcal{H}^{\mu}/\mathcal{H}$ is a unit four-vector parallel to $\mathcal{H}^{\mu}$.
It is straightforward to show that the Eddington factor can be written as
\begin{equation}
\mathfrak{k}=\f{\widehat{\mathsf{h}}_{\mu}\widehat{\mathsf{h}}_{\nu}\,\mathcal{K}^{\mu\nu}}{\mathcal{J}}
=\f{\f{1}{4\pi}\int_{\mathbb{S}^{2}}f(\omega)\,(\widehat{\mathsf{h}}_{\mu}\ell^{\mu})^{2}\,d\omega}{\f{1}{4\pi}\int_{\mathbb{S}^{2}}f(\omega)\,d\omega}
=\f{\f{1}{2}\int_{-1}^{1}\mathfrak{f}(\mu)\,\mu^{2}\,d\mu}{\f{1}{2}\int_{-1}^{1}\mathfrak{f}(\mu)\,d\mu},
\label{eq:eddingtonFactor}
\end{equation}
where we have defined
\begin{equation}
\mathfrak{f}(\mu)=\f{1}{2\pi}\int_{0}^{2\pi}f(\mu,\varphi)\,d\varphi.
\end{equation}
In the last step in Eq.~\eqref{eq:eddingtonFactor} we have aligned the momentum-space coordinate system in the comoving frame so that $\widehat{\mathsf{h}}_{\mu}\ell^{\mu}=\widehat{\mathsf{h}}_{\hat{\mu}}\ell^{\hat{\mu}}=\cos\vartheta=\mu$.
(Note that this is not the same $\mu$ that will be defined later, in Sect.~\ref{sec:PhaseSpaceCoordinates}.
The angle here is defined in terms of the direction specified by $\widehat{\mathsf{h}}_{\hat{\mu}}$, whereas in Sect.~\ref{sec:PhaseSpaceCoordinates} it will be defined in terms of $\hat{r}$.)
The two-moment closure for $\mathcal{K}^{\mu\nu}$ requires the Eddington factor to be specified in terms of $\mathcal{J}$ and $\mathfrak{h}$ (or equivalently $\mathcal{D}$ and $\mathfrak{h}$).
We will discuss some specific approaches further below.
In a similar way, we can construct the third-order moment, $\mathcal{L}^{\mu\nu\rho}$, depending on $\mathcal{J}$ and $\mathcal{H}^{\mu}$, as the symmetric rank-three tensor that is orthogonal to $u_{\mu}$ and that satisfies the trace condition $\mathcal{L}^{\mu\nu}_{\hspace{12pt}\nu}=\mathcal{H}^{\mu}$.
Following \citep[e.g.,][]{Pen92,CaEnMe13a,JuObJa15},
\begin{equation}
\mathcal{L}^{\mu\nu\rho}
=\f{1}{2}\,
\Big[\,
\big(\,\mathfrak{h}-\mathfrak{q}\,\big)\,
\Big(\,\widehat{\mathsf{h}}^{\mu}\,h^{\nu\rho}+\widehat{\mathsf{h}}^{\nu}\,h^{\mu\rho}+\widehat{\mathsf{h}}^{\rho}\,h^{\mu\nu}\,\Big)
+\big(\,5\,\mathfrak{q}-3\,\mathfrak{h}\,\big)\,\widehat{\mathsf{h}}^{\mu}\,\widehat{\mathsf{h}}^{\nu}\,\widehat{\mathsf{h}}^{\rho}
\,\Big]\,\mathcal{J},
\label{eq:radiationHeatFluxTensor}
\end{equation}
where we have defined the ``heat flux'' factor $\mathfrak{q}(\mathcal{J},\mathfrak{h})$:
\begin{equation}
\mathfrak{q}
= \f{\widehat{\mathsf{h}}_{\mu}\,\widehat{\mathsf{h}}_{\nu}\,\widehat{\mathsf{h}}_{\rho}\,\mathcal{L}^{\mu\nu\rho}}{\mathcal{J}}
=\f{\f{1}{4\pi}\int_{\mathbb{S}^{2}}f(\omega)\,(\widehat{\mathsf{h}}_{\mu}\ell^{\mu})^{3}\,d\omega}{\f{1}{4\pi}\int_{\mathbb{S}^{2}}f(\omega)\,d\omega}
=\f{\f{1}{2}\int_{-1}^{1}\mathfrak{f}(\mu)\,\mu^{3}\,d\mu}{\f{1}{2}\int_{-1}^{1}\mathfrak{f}(\mu)\,d\mu}.
\label{eq:heatFluxFactor}
\end{equation}
The two-moment closure for $\mathcal{L}^{\mu\nu\rho}$ requires that we specify the heat flux factor in terms of $\mathcal{J}$ and $\mathfrak{h}$ (or $\mathcal{D}$ and $\mathfrak{h}$).
To complete the specification of the two-moment closure, the Eddington and heat flux factors must be specified in terms of the zeroth and first moments.
To this end, several approaches have been proposed for the Eddington factor, including maximum entropy closure \citep[e.g.,][]{Mine78}, Kershaw-type closure \citep[e.g.,][]{Kers76}, and closures derived from fits to results obtained with higher-fidelity models \citep[e.g.,][]{Janka91}.
In the context of spherically symmetric proto-neutron star models, \citet{MuAbUr17} carried out a comprehensive comparison of results obtained with two-moment neutrino transport, using analytic Eddington factors, to results obtained with Monte Carlo transport calculations.
\citet{MuAbUr17} included Eddington factors from \citet{WiCoCo75,Kers76,Le84,Mine78,CeBl94,Janka91,Janka92}, and found no closure to perform consistently better than the others in the test cases considered.
Because the maximum entropy closures of \citet{Mine78} and \citet{CeBl94} gave practically identical results and never yielded the worst results, and given the simplicity of the closure by \citet{Mine78} relative to the closure by \citet{CeBl94}, \citet{MuAbUr17} concluded that the \citet{Mine78} closure is the most attractive choice for neutrino transport around proto-neutron stars.
The closures provided by \citet{Mine78} and \citet{Le84} are probably the most widely used in core-collapse supernova simulations employing two-moment neutrino transport.
Recently, \citet{JuObJa15}, comparing the closures of \citet{Mine78,CeBl94,Le84} in the context of a simulation of collapse and post-bounce evolution of a 13~$M_{\odot}$ star in spherical symmetry, showed that the differences in shock radii, neutrino luminosities, and mean energies are practically negligible.
This may be because the closures are very similar for the values of $\mathcal{J}$ and $\mathfrak{h}$ encountered.
\citet{ChEnHa19} considered Eddington factors by \citet{Mine78,CeBl94,LaBa11,BaLa17} and found that, under certain conditions, results obtained with closures based on Fermi--Dirac statistics can differ significantly from results obtained with the \citet{Mine78} closure, which is based on Boltzmann statistics.
Below, we discuss the closures due to \citet{Mine78}, \citet{Le84}, and \citet{Kers76} in further detail and give explicit expressions for the Eddington and heat flux factors, which are also plotted in Figure~\ref{fig:eddingtonFactors} (see figure caption for details).
\begin{figure}[htb]
\centering
\includegraphics[width=0.6\textwidth]{EddingtonAndHeatFluxFactors}
\caption{Plot of Eddington factors $\mathfrak{k}$ (solid lines) and heat flux factors $\mathfrak{q}$ (dotted lines) versus flux factor $\mathfrak{h}$ for the closures due to \citet{Mine78} (black), \citet{Le84} (magenta), and \citet{Kers76} (blue).}
\label{fig:eddingtonFactors}
\end{figure}
\paragraph{Maximum entropy closure}
The maximum entropy approach to specifying the Eddington and heat flux factors comes from statistical mechanics, and has been used extensively in moment models for radiation transport \citep[e.g.,][]{Mine78,CeBl94}.
In this approach, the ``most probable'' values of $\mathfrak{k}$ and $\mathfrak{q}$ are determined by finding the distribution function $\mathfrak{f}_{\mbox{\tiny{\sc Me}}}$ that maximizes the entropy functional $s[\mathfrak{f}_{\mbox{\tiny{\sc Me}}}]$, subject to the constraints that $\mathfrak{f}_{\mbox{\tiny{\sc Me}}}$ reproduces the known moments (e.g., $\mathcal{D}$ and $\mathfrak{h}$). The unknowns can then be computed from Eqs.~\eqref{eq:eddingtonFactor} and \eqref{eq:heatFluxFactor} by setting $\mathfrak{f}=\mathfrak{f}_{\mbox{\tiny{\sc Me}}}$. For the two-moment model, the maximum entropy distribution is obtained by extremizing \begin{equation} S = \int_{-1}^{1}s[\mathfrak{f}_{\mbox{\tiny{\sc Me}}}]\,d\mu + \alpha_{0}\int_{-1}^{1}\mathfrak{f}_{\mbox{\tiny{\sc Me}}}\,d\mu + \alpha_{1}\int_{-1}^{1}\mathfrak{f}_{\mbox{\tiny{\sc Me}}}\,\mu\,d\mu \label{eq:maximumEntropyObjectiveFunction} \end{equation} with respect to $\mathfrak{f}_{\mbox{\tiny{\sc Me}}}$, where the Lagrange multipliers $\alpha_{0}$ and $\alpha_{1}$ are introduced for the constraints. A particularly simple closure is obtained by considering the case of Boltzmann statistics, where $s[\mathfrak{f}_{\mbox{\tiny{\sc Me}}}]=\mathfrak{f}_{\mbox{\tiny{\sc Me}}}\ln\mathfrak{f}_{\mbox{\tiny{\sc Me}}}-\mathfrak{f}_{\mbox{\tiny{\sc Me}}}$. This case was considered in detail by \citet{Mine78}, and is the low-occupancy limit ($\mathcal{D}\ll1$) of the more appropriate case (for neutrino transport) of Fermi--Dirac statistics considered by \citet{CeBl94}. For the case of Boltzmann statistics, the maximum entropy distribution is easily found to be given by \begin{equation} \mathfrak{f}_{\mbox{\tiny{\sc Me}}}(\mu) = \exp\big(\,\alpha_{0}+\alpha_{1}\,\mu\,\big), \label{eq:maximumEntropyDistribution} \end{equation} where $\alpha_{0}$ and $\alpha_{1}$ are found from the known moments. Direct integration of Eq.~\eqref{eq:maximumEntropyDistribution} gives \citep{Mine78} \begin{align} \mathcal{D} = \f{1}{2}\int_{-1}^{1}\mathfrak{f}_{\mbox{\tiny{\sc Me}}}(\mu)\,d\mu &=e^{\alpha_{0}}\,\sinh(\alpha_{1})/\alpha_{1}, \label{eq:numberDensityME} \\ \mathcal{I} = \f{1}{2}\int_{-1}^{1}\mathfrak{f}_{\mbox{\tiny{\sc Me}}}(\mu)\,\mu\,d\mu &=e^{\alpha_{0}}\,\big(\,\alpha_{1}\,\cosh(\alpha_{1})-\sinh(\alpha_{1})\,\big)/\alpha_{1}^{2}, \label{eq:numberFluxME} \end{align} which can be solved for $\alpha_{0}$ and $\alpha_{1}$. In particular, the flux factor is given by the Langevin function $L(\alpha_{1})$, \begin{equation} \mathfrak{h} =\mathcal{I}/\mathcal{D} =\coth(\alpha_{1})-1/\alpha_{1} \equiv L(\alpha_{1}), \end{equation} and is independent of $\alpha_{0}$. Thus, $\alpha_{1}(\mathfrak{h})=L^{-1}(\mathfrak{h})$. (The inversion of the Langevin function must be done numerically.) Once $\alpha_{1}$ is obtained, $\alpha_{0}$ can be obtained directly from either Eq.~\eqref{eq:numberDensityME} or Eq.~\eqref{eq:numberFluxME}, which completes the specification of $\mathfrak{f}_{\mbox{\tiny{\sc Me}}}$. 
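As an illustrative aside, the inversion of the Langevin function is numerically benign, since $L$ is smooth and monotone increasing on $(0,\infty)$. The following minimal Python sketch (ours, for illustration only, and not taken from any of the cited works) inverts $L$ by bisection and recovers $\alpha_{0}$ from Eq.~\eqref{eq:numberDensityME}:
\begin{verbatim}
import numpy as np

def langevin(x):
    # L(x) = coth(x) - 1/x, with the small-x limit L(x) ~ x/3
    return x / 3.0 if abs(x) < 1.0e-6 else 1.0 / np.tanh(x) - 1.0 / x

def invert_langevin(h, tol=1.0e-12):
    # Solve L(alpha1) = h by bisection; L(0) = 0 and L -> 1 as x -> infinity,
    # so a root exists for any flux factor 0 < h < 1.
    lo, hi = 0.0, 1.0e4
    while hi - lo > tol * max(1.0, lo):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if langevin(mid) < h else (lo, mid)
    return 0.5 * (lo + hi)

def alpha_0(D, alpha_1):
    # Invert Eq. (numberDensityME): D = exp(alpha_0) * sinh(alpha_1) / alpha_1
    # (np.sinh overflows for alpha_1 beyond ~700, i.e., for h very close to 1)
    return np.log(D * alpha_1 / np.sinh(alpha_1))
\end{verbatim}
For example, \texttt{invert\_langevin(0.5)} returns $\alpha_{1}\approx1.80$.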
Then the Eddington factor and heat flux factor can be computed by direct integration:
\begin{equation}
\mathfrak{k}(\mathfrak{h}) = 1-2\,\mathfrak{h}/\alpha_{1}(\mathfrak{h})
\quad\mbox{and}\quad
\mathfrak{q}(\mathfrak{h}) = \coth\big(\alpha_{1}(\mathfrak{h})\big) - 3\,\mathfrak{k}(\mathfrak{h})/\alpha_{1}(\mathfrak{h}),
\label{eq:eddingtonAndHeatFluxFactorsME}
\end{equation}
which closes the two-moment model under the simplifying assumption of Boltzmann statistics, a reasonable approximation for neutrinos only when the occupancy is low; i.e., when $\mathcal{D}\ll1$.
This closure is referred to as the Minerbo closure and is commonly used in simulations employing spectral two-moment neutrino transport \citep[e.g.,][]{KuTaKo16,JuBoJa18,OcCo18}.
In practice, to avoid inverting the Langevin function for $\alpha_{1}$, the Eddington and heat flux factors can be approximated as polynomials in the flux factor.
This leads to algebraic expressions, which are computationally more efficient.
The algebraic form of the Eddington factor, which approximates the one in Eq.~\eqref{eq:eddingtonAndHeatFluxFactorsME} to better than one percent, is given by \citep{CeBl94}
\begin{equation}
\mathfrak{k}_{\mbox{\tiny Alg}}(\mathfrak{h})
=\f{1}{3} + \f{2}{15}\,\big(\,3\,\mathfrak{h}^{2} - \mathfrak{h}^{3} + 3\,\mathfrak{h}^{4}\,\big).
\label{eq:eddingtonFactorMinerbo}
\end{equation}
Similarly, the algebraic form of the heat flux factor, which approximates the one in Eq.~\eqref{eq:eddingtonAndHeatFluxFactorsME} to within a few percent, is given by \citep{JuObJa15}
\begin{equation}
\mathfrak{q}_{\mbox{\tiny Alg}}(\mathfrak{h})
=\mathfrak{h}\,\big(\,45 + 10\,\mathfrak{h} - 12\,\mathfrak{h}^{2} - 12\,\mathfrak{h}^{3} + 38\,\mathfrak{h}^{4} - 12\,\mathfrak{h}^{5} + 18\,\mathfrak{h}^{6}\,\big) / 75.
\end{equation}
In Figure~\ref{fig:eddingtonFactors}, the Eddington and heat flux factors $\mathfrak{k}_{\mbox{\tiny Alg}}$ and $\mathfrak{q}_{\mbox{\tiny Alg}}$ are plotted versus the flux factor $\mathfrak{h}$ (denoted ``Minerbo'' in the legend, using solid and dotted black lines, respectively).
Another two-moment closure based on the maximum entropy principle is the so-called M1 closure \citep[e.g.,][]{Le84,DuFu99}.
The M1 closure is thus based on the same principle as the Minerbo closure, but a different entropy functional is considered; namely, the entropy functional for Bose--Einstein statistics $s[\mathfrak{f}_{\mbox{\tiny{\sc Me}}}]=(1+\mathfrak{f}_{\mbox{\tiny{\sc Me}}})\ln(1+\mathfrak{f}_{\mbox{\tiny{\sc Me}}})-\mathfrak{f}_{\mbox{\tiny{\sc Me}}}\,\ln\mathfrak{f}_{\mbox{\tiny{\sc Me}}}$.
For the M1 closure the Eddington factor is given by
\begin{equation}
\mathfrak{k}_{\mbox{\tiny M1}}(\mathfrak{h}) = \f{3+4\,\mathfrak{h}^{2}}{5+2\sqrt{4-3\,\mathfrak{h}^{2}}}.
\label{eq:eddingtonFactorM1}
\end{equation}
It should be noted that \citet{Le84} derived this result without the maximum entropy principle.
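For concreteness, the exact factors in Eq.~\eqref{eq:eddingtonAndHeatFluxFactorsME} and the algebraic fits above are compared in the following Python sketch (again ours, building on the Langevin-inversion sketch above rather than on code from the cited works):
\begin{verbatim}
def minerbo_exact(h):
    # Eddington and heat flux factors, Eq. (eddingtonAndHeatFluxFactorsME)
    if h < 1.0e-6:
        return 1.0 / 3.0, 3.0 * h / 5.0   # isotropic limits
    a1 = invert_langevin(h)
    k = 1.0 - 2.0 * h / a1
    q = 1.0 / np.tanh(a1) - 3.0 * k / a1
    return k, q

def minerbo_algebraic(h):
    # Eq. (eddingtonFactorMinerbo) and the heat flux fit quoted above
    k = 1.0 / 3.0 + (2.0 / 15.0) * (3.0 * h**2 - h**3 + 3.0 * h**4)
    q = h * (45.0 + 10.0 * h - 12.0 * h**2 - 12.0 * h**3
             + 38.0 * h**4 - 12.0 * h**5 + 18.0 * h**6) / 75.0
    return k, q
\end{verbatim}
Scanning $\mathfrak{h}\in[0,1)$ with both functions reproduces the sub-percent (for $\mathfrak{k}$) and few-percent (for $\mathfrak{q}$) agreement quoted above.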
More recently, \citet{VaAuDu11} proposed a numerical method for multi-group radiation hydrodynamics in the $\mathcal{O}(v/c)$ limit, and provided an expression for the heat flux factor in the M1 model:
\begin{equation}
\mathfrak{q}_{\mbox{\tiny M1}}(\mathfrak{h}) = 3\,\varphi_{1}(\mathfrak{h})\,\mathfrak{h} + \varphi_{2}(\mathfrak{h})\,\mathfrak{h}^{3},
\label{eq:heatfluxFactorM1}
\end{equation}
where
\begin{align}
\varphi_{1}(\mathfrak{h})
&=\f{(\mathfrak{h}-2+a)(\mathfrak{h}+2-a)}{4\mathfrak{h}(a-2)^{5}}
\Big[\,
12\ln\Big(\f{\mathfrak{h}-2+a}{\mathfrak{h}+2-a}\Big)\big(\mathfrak{h}^{4}+2a\mathfrak{h}^{2}-7\mathfrak{h}^{2}-4a+8\big) \nonumber \\
&\hspace{108pt}
+48\mathfrak{h}^{3}-9a\mathfrak{h}^{3}-80\mathfrak{h}+40a\mathfrak{h}
\,\Big], \label{eq:phi1M1} \\
\varphi_{2}(\mathfrak{h})
&=\f{1}{\mathfrak{h}^{3}(a-2)^{5}}
\Big[\,
60\ln\Big(\f{\mathfrak{h}-2+a}{\mathfrak{h}+2-a}\Big)\big(-\mathfrak{h}^{6}+15\mathfrak{h}^{4}-3a\mathfrak{h}^{4}+15a\mathfrak{h}^{2}-42\mathfrak{h}^{2}-16a+32\big) \nonumber \\
&\hspace{50pt}
+54a\mathfrak{h}^{5}-465\mathfrak{h}^{5}-674a\mathfrak{h}^{3}+2140\mathfrak{h}^{3}+1056a\mathfrak{h}-2112\mathfrak{h}
\,\Big], \label{eq:phi1M2}
\end{align}
and $a=\sqrt{4-3\mathfrak{h}^{2}}$.
The M1 closure is another commonly used closure in simulations employing spectral two-moment neutrino transport \citep[e.g.,][]{SkDoBu19}.
In Figure~\ref{fig:eddingtonFactors}, the Eddington and heat flux factors $\mathfrak{k}_{\mbox{\tiny M1}}$ and $\mathfrak{q}_{\mbox{\tiny M1}}$ are plotted versus the flux factor $\mathfrak{h}$ (denoted ``Levermore'' in the legend, using solid and dotted magenta lines, respectively).
When plotting the heat flux factor, we found $\varphi_{1}$ and $\varphi_{2}$ to exhibit oscillatory behavior as $\mathfrak{h}\to0$.
To avoid these oscillations in $\mathfrak{q}_{\mbox{\tiny M1}}$, we used Taylor expansions of $\varphi_{1}$ (around $\mathfrak{h}=0.1$) and $\varphi_{2}$ (around $\mathfrak{h}=0.2$) to plot $\mathfrak{q}_{\mbox{\tiny M1}}$ for smaller values of $\mathfrak{h}$.
The low-occupancy assumption used for the Minerbo closure does not hold everywhere in a supernova simulation, but may be a reasonable approximation in the neutrino heating region.
The M1 closure based on Bose--Einstein statistics is also not a good approximation when the phase-space occupancy is high.
In this case, a more realistic treatment for neutrinos must consider the entropy functional for Fermi--Dirac statistics, where $s[\mathfrak{f}_{\mbox{\tiny{\sc Me}}}]=\mathfrak{f}_{\mbox{\tiny{\sc Me}}}\,\ln\mathfrak{f}_{\mbox{\tiny{\sc Me}}}+(1-\mathfrak{f}_{\mbox{\tiny{\sc Me}}})\ln(1-\mathfrak{f}_{\mbox{\tiny{\sc Me}}})$, and follow the procedure outlined above, as was done by \citet{CeBl94}, and more recently in further detail by \citet{LaBa11}.
For the maximum entropy closure derived by \citet{CeBl94}, the Eddington factor is
\begin{equation}
\mathfrak{k}_{\mbox{\tiny CB}}(\mathcal{D},\mathfrak{h}) = \f{1}{3} + \f{2\,(1-\mathcal{D})\,(1-2\mathcal{D})}{3}\,\Theta\Big(\f{\mathfrak{h}}{1-\mathcal{D}}\Big),
\label{eq:eddingtonFactorCB}
\end{equation}
where $\Theta(x)=x^{2}(3-x+3x^{2})/5$.
To account for Fermi--Dirac statistics, the Eddington factor in Eq.~\eqref{eq:eddingtonFactorCB} depends on both the number density $\mathcal{D}$ and the flux factor $\mathfrak{h}$.
In the low-occupancy limit, when $\mathcal{D}\ll1$, this Eddington factor reduces to the Eddington factor due to Minerbo in Eq.~\eqref{eq:eddingtonFactorMinerbo}.
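The Eddington factors in Eqs.~\eqref{eq:eddingtonFactorM1} and \eqref{eq:eddingtonFactorCB} are similarly simple to evaluate; a minimal sketch (ours) is
\begin{verbatim}
import numpy as np

def eddington_m1(h):
    # Eq. (eddingtonFactorM1): Levermore / maximum-entropy Bose-Einstein
    return (3.0 + 4.0 * h**2) / (5.0 + 2.0 * np.sqrt(4.0 - 3.0 * h**2))

def eddington_cb(D, h):
    # Eq. (eddingtonFactorCB): maximum-entropy Fermi-Dirac; reduces to
    # the algebraic Minerbo factor in the low-occupancy limit D -> 0
    Theta = lambda x: x**2 * (3.0 - x + 3.0 * x**2) / 5.0
    return (1.0 / 3.0
            + (2.0 / 3.0) * (1.0 - D) * (1.0 - 2.0 * D)
            * Theta(h / (1.0 - D)))
\end{verbatim}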
\citet{CeBl94} did not provide an expression for the heat flux factor. It should be noted that the term ``M1 closure,'' used here to refer to the closure in Eqs.~\eqref{eq:eddingtonFactorM1} and \eqref{eq:heatfluxFactorM1}, derives from the more general term ``M$N$ closure,'' which is used in transport theory to refer to maximum entropy closures applied to $N$-moment hierarchies. As such, all the closures discussed in this section are M1 closures, but they differ in the entropy functional that is maximized. \paragraph{Kershaw closure} A different approach to the closure problem was proposed by \citet{Kers76}. The key idea behind the Kershaw closure is to consider the bounds on the moments generated by the underlying distribution function. For a nonnegative distribution function ($\mathfrak{f}\ge0$), the set generated by the normalized moments $\{\,1,\,\mathfrak{h},\,\mathfrak{k},\,\mathfrak{q}\,\}$ is convex and bounded, which in turn allows one to construct any sequence of moments in this set by a convex combination of moment vectors on the boundary of this domain. The moments constructed by this procedure are then ``good'' in the sense that they can be obtained from a nonnegative distribution function. For the two-moment model, the Kershaw closure procedure can be used to specify $\mathfrak{k}$ and $\mathfrak{q}$ in terms of $\mathfrak{h}$. For $\mathfrak{f}\ge0$, it is straightforward to show that $-1\le\mathfrak{h}\le1$, while the bounds on the Eddington factor are given by \begin{equation} \mathfrak{h}^{2}\equiv\mathfrak{k}_{\textnormal{\tiny\textsc{L}}}(\mathfrak{h})\le\mathfrak{k}\le\mathfrak{k}_{\textnormal{\tiny\textsc{H}}}(\mathfrak{h})\equiv1. \end{equation} For $\zeta\in[0,1]$, the Eddington factor can be written as the convex combination \begin{equation} \mathfrak{k}_{\mbox{\tiny{\sc K}}}(\mathfrak{h}) = \zeta\,\mathfrak{k}_{\textnormal{\tiny\textsc{L}}}(\mathfrak{h})+(1-\zeta)\,\mathfrak{k}_{\textnormal{\tiny\textsc{H}}}(\mathfrak{h}). \end{equation} Demanding that this expression be correct in the limit when $\mathfrak{h}=0$, i.e., $\mathfrak{k}(0)=1/3$, gives $\zeta=2/3$, so that \begin{equation} \mathfrak{k}_{\mbox{\tiny{\sc K}}}(\mathfrak{h}) = \f{1}{3} + \f{2}{3}\,\mathfrak{h}^{2}. \label{eq:eddingtonFactorKershaw} \end{equation} Similarly, for the heat flux factor, it can be shown that the following bounds hold \citep[e.g.,][]{Schn16}: \begin{align} -\mathfrak{k}+\f{(\mathfrak{h}+\mathfrak{k})^{2}}{1+\mathfrak{h}} \equiv\mathfrak{q}_{\textnormal{\tiny\textsc{L}}}(\mathfrak{h},\mathfrak{k})\le\mathfrak{q}\le\mathfrak{q}_{\textnormal{\tiny\textsc{H}}}(\mathfrak{h},\mathfrak{k})\equiv \mathfrak{k}-\f{(\mathfrak{h}-\mathfrak{k})^{2}}{1-\mathfrak{h}}. \end{align} Constructing the heat flux factor from a convex combination of these bounds, and using $\mathfrak{k}_{\mbox{\tiny{\sc K}}}(\mathfrak{h})$, gives \begin{equation} \mathfrak{q}_{\mbox{\tiny{\sc K}}}(\mathfrak{h}) =\zeta\,\mathfrak{q}_{\textnormal{\tiny\textsc{L}}}(\mathfrak{h},\mathfrak{k}_{\mbox{\tiny{\sc K}}}(\mathfrak{h})) +(1-\zeta)\,\mathfrak{q}_{\textnormal{\tiny\textsc{H}}}(\mathfrak{h},\mathfrak{k}_{\mbox{\tiny{\sc K}}}(\mathfrak{h})). \end{equation} Demanding that $\mathfrak{q}_{\mbox{\tiny{\sc K}}}(0)=0$ (isotropic limit) gives $\zeta=1/2$, so that \begin{equation} \mathfrak{q}_{\mbox{\tiny{\sc K}}}(\mathfrak{h}) =\f{\mathfrak{h}\,\big(\,\mathfrak{h}^{2}+\mathfrak{k}_{\mbox{\tiny{\sc K}}}(\mathfrak{h})^{2}-2\,\mathfrak{k}_{\mbox{\tiny{\sc K}}}(\mathfrak{h})\,\big)}{(\mathfrak{h}^{2}-1)}. 
\label{eq:heatFluxFactorKershaw}
\end{equation}
In Fig.~\ref{fig:eddingtonFactors}, the Eddington and heat flux factors $\mathfrak{k}_{\mbox{\tiny{\sc K}}}$ and $\mathfrak{q}_{\mbox{\tiny{\sc K}}}$ are plotted versus the flux factor $\mathfrak{h}$ (denoted ``Kershaw'' in the legend; solid and dotted blue lines, respectively).
The Kershaw closure considered here only assumes $\mathfrak{f}\ge0$, which holds for Bose--Einstein and Boltzmann statistics.
Kershaw-type closures for Fermi--Dirac statistics, appropriate for neutrinos, for which $\mathfrak{f}\in[0,1]$, were recently considered by \citet{BaLa17}.
\subsubsection{One-moment kinetics}
\label{sec:oneMomentKinetics}
One-moment models (commonly referred to as flux-limited diffusion models \citep{LePo81}) are among the earliest models adopted for neutrino transport in core-collapse supernova simulations \citep{Bruenn1975}, and are still in use today \citep[e.g.,][]{BrBlHi20,RaJuJa19}.
Essentially, one-moment models evolve only the zeroth moment of the distribution function, while higher-order moments are specified through a closure procedure.
Specifically, the radiation flux is specified in terms of the zeroth moment in a way that is correct in both the diffusion and streaming regimes.
In order to be correct in the streaming regime, a flux limiter is applied to transition the model from parabolic (diffusion) to hyperbolic (streaming).
Here we consider the 3+1 general relativistic formulation presented by \citet{RaJuJa19}, which was derived using the formalisms from \citet{ShKiSe11,EnCaMe12c,CaEnMe13a}.
We start from a slightly different perspective, since we have already presented the main evolution equation in Eq.~\eqref{eq:spectralLagrangianEnergyEquation_3p1}.
\citet{RaJuJa19} define their angular moments with an additional factor of $\varepsilon^{2}$ relative to our definitions in Eq.~\eqref{eq:energyMomentsLagrangian} and absorb $\sqrt{\gamma}$ into the variables; hence, we make the following definitions:
\begin{equation}
\big\{\,\hat{\mathcal{J}},\,\hat{\mathcal{H}}^{\mu},\,\hat{\mathcal{T}}^{\mu\nu},\ldots\big\}
=\sqrt{\gamma}\,\varepsilon^{2}\,\big\{\,\mathcal{J},\,\mathcal{H}^{\mu},\,\mathcal{T}^{\mu\nu},\ldots\big\};
\end{equation}
i.e., similar definitions hold for other moments appearing in the equations.
(They also do not normalize their moments by the factor of $4\pi$, but that should not cause confusion in the presentation here.)
We can then write Eq.~\eqref{eq:spectralLagrangianEnergyEquation_3p1} as
\begin{align}
&\f{1}{\alpha}
\big[\,\pd{}{t}\big(\,\big[\,W\hat{\mathcal{J}}+v^{i}\hat{\mathcal{H}}_{i}\,\big]\,\big)
+\pd{}{i}\big(\,\big[\,\alpha\,\hat{\mathcal{H}}^{i}+\big(\,\alpha\,v^{i}-\beta^{i}\,\big)\,W\hat{\mathcal{J}}\,\big]\,\big)\,\big] \nonumber \\
&\hspace{6pt}
+\hat{\mathcal{R}}_{\varepsilon} - \pd{}{\varepsilon}\big(\,\varepsilon\,\hat{\mathcal{R}}_{\varepsilon}\,\big)
=\f{1}{4\pi}\int_{\mathbb{S}^{2}}\hat{\mathcal{C}}(f)\,d\omega,
\label{eq:spectralLagrangianEnergyEquationFLD_3p1}
\end{align}
where we have defined
\begin{align}
\hat{\mathcal{R}}_{\varepsilon}
&= \hat{\mathcal{T}}^{\mu\nu}\nabla_{\mu}u_{\nu} \nonumber \\
&=W\,\Big[\,\hat{\mathcal{F}}_{k}\,\pd{v^{k}}{\tau} + \hat{\mathcal{S}}_{k}^{\hspace{4pt}i}\pd{v^{k}}{i} + \big(\hat{\mathcal{F}}^{i}-\hat{\mathcal{E}}v^{i}\big)\,\pd{\ln\alpha}{i} + \alpha^{-1}\hat{\mathcal{F}}_{k}v^{i}\pd{\beta^{k}}{i} \label{eq:observerCorrectionsLagrangianEnergyEquationFLD_3p1} \\
&\hspace{24pt}
+\hat{\mathcal{S}}^{ik}\big(\,\f{1}{2}v^{m}\pd{\gamma_{ik}}{m}-\mathsf{K}_{ik}\,\big)\,\Big]
-\big(\,\hat{\mathcal{E}}-v^{k}\hat{\mathcal{F}}_{k}\,\big)\,\pd{W}{\tau} - \big(\,\hat{\mathcal{F}}^{i}-\hat{\mathcal{S}}_{k}^{\hspace{4pt}i}v^{k}\,\big)\,\pd{W}{i}, \nonumber
\end{align}
where, in the second step, we used Eq.~\eqref{eq:observerCorrectionsLagrangianEnergyEquation_3p1}, re-expressed in the form given by \citet{RaJuJa19} (cf.\ their Eq.~(A14)), and we have defined $\pd{}{\tau}=n^{\mu}\pd{}{\mu}$.
\citet{RaJuJa19} solve for moments defined in an orthonormal comoving frame, and write
\begin{align}
\hat{\mathcal{H}}^{i}
&=L^{i}_{\hspace{2pt}\hat{\mu}}\hat{\mathcal{H}}^{\hat{\mu}}
= e^{i}_{\hspace{2pt}\bar{\mu}}\,\Lambda^{\bar{\mu}}_{\hspace{4pt}\hat{\mu}}\hat{\mathcal{H}}^{\hat{\mu}} \nonumber \\
&=e^{i}_{\hspace{2pt}\hat{i}}\,\hat{\mathcal{H}}^{\hat{i}} + W\,\Big(\,\f{W}{W+1}\,v^{i}-\f{\beta^{i}}{\alpha}\,\Big)\,\bar{v}_{\hat{i}}\,\hat{\mathcal{H}}^{\hat{i}},
\label{eq:coordinateBasisInTermsOfComovingH}
\end{align}
where
\begin{equation}
\hat{\mathcal{H}}^{\hat{i}}(\varepsilon)=\sqrt{\gamma}\,\f{\varepsilon^{3}}{4\pi}\int_{\mathbb{S}^{2}}f(\omega,\varepsilon)\,\ell^{\hat{i}}(\omega)\,d\omega.
\end{equation}
(Similar expressions can be written for higher-order moments; see \citet{EnCaMe12c,RaJuJa19}.)
In Eq.~\eqref{eq:coordinateBasisInTermsOfComovingH}, we remind the reader that $\Lambda^{\bar{\mu}}_{\hspace{4pt}\hat{\mu}}$ is the Lorentz transformation between the orthonormal comoving frame basis and the orthonormal laboratory frame basis, while $e^{i}_{\hspace{2pt}\bar{\mu}}$ is a transformation between the orthonormal laboratory frame basis and the coordinate basis.
We have made the choice $e^{\mu}_{\hspace{2pt}\bar{0}}=n^{\mu}$, and $\bar{v}_{\hat{i}}$ are three-velocity components in the orthonormal laboratory frame basis ($\bar{v}_{\bar{i}}=\bar{v}^{\bar{i}}=\bar{v}_{\hat{i}}=\bar{v}^{\hat{i}}$), so that $v^{i}=e^{i}_{\hspace{2pt}\hat{i}}\bar{v}^{\hat{i}}$, where the notation $e^{i}_{\hspace{2pt}\hat{i}}=e^{i}_{\hspace{2pt}\bar{i}}\delta^{\bar{i}}_{\hspace{2pt}\hat{i}}$ is used.
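To make the transformation in Eq.~\eqref{eq:coordinateBasisInTermsOfComovingH} concrete, the following sketch (ours; it assumes Cartesian coordinates and a flat spatial metric, so that the triad $e^{i}_{\hspace{2pt}\hat{i}}$ reduces to the identity and $v^{i}=\bar{v}^{\hat{i}}$) assembles the coordinate-basis flux from the comoving orthonormal components:
\begin{verbatim}
import numpy as np

def coordinate_flux(H_hat, v_bar, beta, alpha):
    # Eq. (coordinateBasisInTermsOfComovingH) with an identity triad:
    # H^i = Hhat^i + W * ( W/(W+1) v^i - beta^i/alpha ) * (vbar . Hhat)
    W = 1.0 / np.sqrt(1.0 - np.dot(v_bar, v_bar))   # Lorentz factor
    return (H_hat
            + W * (W / (W + 1.0) * v_bar - beta / alpha)
            * np.dot(v_bar, H_hat))
\end{verbatim}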
To close the one-moment (MGFLD) model, \citet{RaJuJa19} replace the momentum density by the gradient of the energy density:
\begin{eqnarray}
\mathcal{H}^{\hat i}\longrightarrow - D \frac{e^{k \hat i}}{\alpha^3} \partial_k (\alpha^3 \mathcal{J})~,
\label{eq:tr_fld_flux}
\end{eqnarray}
where $D$ is the diffusion coefficient, which they express in terms of the flux limiter $\lambda\in[0,1/3]$ and the total opacity $\kappa_{\mathrm{t}}$ as
\begin{eqnarray}
D \equiv \frac{\lambda}{\kappa_\mathrm{t}}.
\end{eqnarray}
For Levermore--Pomraning and Wilson flux limiting,
\begin{eqnarray}\label{eq:limiterLPW}
\lambda_\mathrm{LP} & \equiv & \frac{2+R}{6+3R+R^2}~, \nonumber \\
\lambda_\mathrm{Wilson} & \equiv & \frac{1}{3+R}~,
\end{eqnarray}
respectively, where \citet{RaJuJa19} define the generalized Knudsen number as
\begin{eqnarray}\label{eq:knudsen}
R & \equiv & \frac{|e^{k \hat i}\partial_{k} (\alpha^3\mathcal{J})|}{\kappa_\mathrm{t}\alpha^3\mathcal{J}}.
\end{eqnarray}
Thus, when the opacity is high, $R\to0$ and $\lambda\to1/3$.
On the other hand, when the opacity is low, $\lambda\to1/R$ and
\begin{equation}
\mathcal{H}^{\hat i}\to-\f{e^{k \hat i}\partial_{k} (\alpha^3\mathcal{J})}{|e^{k \hat i}\partial_{k} (\alpha^3\mathcal{J})|}\,\mathcal{J}.
\end{equation}
The Eddington tensor is related to the neutrino radiation stress tensor by
\begin{eqnarray}\label{eq:edd_tensor1}
\chi^{\hat i \hat j} & = & \frac{\mathcal{K}^{\hat i \hat j}}{\mathcal{J}}~.
\end{eqnarray}
In the MGFLD approximation, the Eddington tensor, which appears in the expression for $\hat{\mathcal{R}}_{\varepsilon}$, takes a form analogous to Eq.~\eqref{eq:radiationStressTensor}:
\begin{eqnarray}\label{eq:edd_tensor_fld}
\chi^{\hat i \hat j} = \frac{1}{2} [(1-\chi)\delta^{\hat i \hat j} + (3\chi-1)h^{\hat i}h^{\hat j}]~.
\end{eqnarray}
In Eq.~\eqref{eq:edd_tensor_fld}, $h^{\hat{i}}$ is a unit vector in the direction of the neutrino flux, $\mathcal{H}^{\hat {i}}$, and $\chi$ is the Eddington factor, which is given by
\begin{eqnarray}\label{eq:edd_factor_fld}
\chi = \lambda + (\lambda R)^2~.
\end{eqnarray}
\section{Neutrino interactions}
\label{sec:interactions}
The phenomenon of core-collapse supernovae is a magnificent juxtaposition of the macroscopic physics of neutrino radiation hydrodynamics and the microscopic physics of neutrino weak interactions and the nuclear equation of state.
In particular, the weak interactions between the neutrinos and the matter are what make neutrinos important to this phenomenon.
Thus, any review of neutrino transport in core-collapse supernovae must include a discussion of such interactions.
In the history of core-collapse supernova modeling, there have been many important examples of studies that have demonstrated the impact of additional weak interaction physics and/or improved treatments of such physics in supernova models.
Here we select a subset of these studies, each chosen to illustrate one of the dimensions of this component of supernova modeling:
(1) The impact of the addition of new weak-interaction channels.
(2) The impact of improved treatments of channels that have already been included in the models.
(3) The interplay between different weak-interaction channels and the impact of adding/changing more than one weak-interaction channel at a time in a model.
(4) The uncertainties in the weak-interaction rates currently used in core-collapse supernova models and their ramifications for core-collapse supernova modeling.
\subsection{An intertwined history}
Looking back at the history of the development of the theory of weak interactions and of core-collapse supernovae, especially during the period after the discovery and publication of the electroweak theory, it becomes obvious that (1) the first period of what can be called modern core-collapse supernova theory, after the publication of the seminal work of Colgate and White, was, for more than a decade, greatly influenced and accelerated by the new electroweak theory, and (2) the interplay between advancing descriptions of neutrino weak interactions and core-collapse supernovae has continued well beyond this period, even to this day.
A year after the publication of the Colgate and White work, the electroweak theory was published \citep{Wein67,Salam1968}.
It was specifically the advent of weak neutral currents that would turn out to be a game changer for core-collapse supernova theory.
Seven years after the publication of the electroweak theory, \citet{Freedman1974} showed that, owing to weak neutral currents, neutrinos could scatter coherently off the nucleons in a nucleus, introducing an $A^2$ dependence in the cross section, where $A$ is the nuclear mass number.
During stellar core collapse, the core is neutronized through the emission and escape of electron neutrinos.
As a result, the core nuclei become large---i.e., have large $A$---given that the nuclear size reflects a competition between Coulomb repulsion, which favors smaller nuclei, and surface tension, which favors larger nuclei, with the latter winning out.
In turn, coherent nuclear scattering cross sections become large.
Following Freedman's discovery and publication, Tubbs and Schramm provided an electroweak-theory-based set of cross sections for problems of astrophysical interest \citep{TuSc75}.
Subsequently, these were implemented in the pioneering work of \citet{Arnett1977}, wherein he showed that coherent nuclear scattering led to the trapping of electron neutrinos during stellar core collapse and to the development of a trapped Fermi sea of electron neutrinos in the core.
This provided the foundation for the discovery five years later by Wilson that the stalled core-collapse supernova shock wave could be revived by charged-current mediated electron neutrino and antineutrino absorption on the shock-liberated nucleons behind it \citep{Wilson1985,BeWi85}.
This discovery marked the beginning of contemporary core-collapse supernova theory, which has largely operated within the framework of the delayed-shock or, equivalently, neutrino-reheating mechanism.
The fifteen years between 1966 and 1982 saw the fundamental and significant advance from the first models of core-collapse supernovae to the establishment of the framework within which all core-collapse supernova modelers operate today.
The developments in core-collapse supernova theory during these first fifteen years were very tightly intertwined with the development of weak interaction physics.
While this period was certainly unique in this regard, additional milestones, owing to further developments in the theory of neutrino interactions in the environments of interest here, have occurred since.
In 1985, Bruenn published a landmark paper on the physics of stellar core collapse \citep{Bruenn1985}.
Bruenn included the following electron neutrino emissivities and opacities in his models, which have come to be known as the ``Bruenn 85'' opacity set.
Bruenn included electron capture on (free) protons and nuclei and the inverse interactions of electron neutrino absorption, as well as scattering on (free) nucleons and electrons and coherent scattering on nuclei in his models.
For electron antineutrino and heavy-flavor neutrino production, electron--positron pair annihilation served as the dominant source after core bounce and shock formation.
Subsequent to Bruenn's publication and prior to the publications discussed below, this set was frozen in as the canonical neutrino opacity set, and it is still used today in code tests and comparisons.
\citet{HaRa98} computed the production of neutrino--antineutrino pairs from nucleon--nucleon bremsstrahlung.
Prior to the recognition that such bremsstrahlung could contribute to, and perhaps dominate, neutrino pair production, pair production in core-collapse supernova models occurred only through electron--positron pair annihilation.
Thus, particularly for the muon and tau neutrino flavors, which have only pair production as sources, bremsstrahlung production introduced a fundamental change.
Figures~\ref{fig:brem4ms} and \ref{fig:brem100ms} show the importance of nucleon--nucleon bremsstrahlung for the production of neutrino--antineutrino pairs of all three flavors, relative to production by electron--positron annihilation.
The results shown are for two times after core bounce, at 4 and 100 ms, in a core-collapse supernova model performed with the \textsc{Chimera} code, initiated from an $18\,M_\odot$ progenitor.
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\textwidth]{brem4ms.pdf}
\caption{The neutrino number production rates due to neutrino--antineutrino pair production via electron--positron annihilation and via nucleon--nucleon bremsstrahlung are plotted. Also shown are the thermalization surfaces for the different neutrino and antineutrino flavors, as well as the shock location at this time after bounce: 4 ms. The data used to generate the plot are taken from a \textsc{Chimera} model using an $18\,M_\odot$ progenitor. At the high densities present at radii below $\sim$10 km in the core, pair production from bremsstrahlung dominates. On the other hand, between the heavy-flavor thermalization surfaces and the shock, production by electron--positron pair annihilation is consistently larger.}
\label{fig:brem4ms}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\textwidth]{brem100ms.pdf}
\caption{The same as in Fig.~\ref{fig:brem4ms} but at a time of 100 ms after bounce, during the critical shock reheating epoch. At this time, at radii above $\sim$25 km, which is well below the thermalization spheres where the neutrino spectra set in, neutrino pair production is dominated by electron--positron pair annihilation. At high densities, below $\sim$10 km, production due to bremsstrahlung continues to dominate.}
\label{fig:brem100ms}
\end{figure}
In the same year, \citet{BuSa98} and \citet{RePrLa98} took on the long-term challenge to understand neutrino interactions in dense, \emph{interacting}, nuclear matter, taking into account nucleon recoil, degeneracy, relativity, thermal motions, and correlations.
In particular, these authors computed new differential scattering rates (and new charged-current absorption and emission rates), which were no longer iso-energetic, as had been assumed before (e.g., in the Bruenn 85 opacity set), but resulted in small energy transfer between the neutrinos and the nucleons.
Per scattering event, the energy transferred is of little consequence, but summed over all of the scattering events in the dense environment in the vicinity of the neutrinospheres, such small-energy scattering has a notable impact.
\citet{MuJaMa12} were the first to demonstrate this.
In particular, they showed that small-energy scattering of heavy-flavor neutrinos by nucleons at the electron neutrino- and antineutrino-spheres led to heating of these neutrinospheres and, consequently, an increase in the electron neutrino flavor luminosities.
Their results are shown in Fig.~\ref{fig:MuJaMa12Fig17}.
This in turn impacted neutrino shock reheating.
In the absence of small-energy scattering on nucleons, shock revival was delayed by 50--100 ms relative to their baseline model.
\begin{figure}[htb]
\includegraphics[width=\textwidth]{MuJaMa12Fig17.pdf}
\caption{Plotted are the neutrino and antineutrino luminosities for all three flavors of neutrinos, as a function of density, at 400 ms after bounce in the general relativistic model of \citet{MuJaMa12} initiated from a $15\,M_\odot$ progenitor. Solid lines show data from the model that includes neutrino--nucleon small-energy scattering. Evident in the plots is the $\sim$20\% increase in both the electron neutrino and antineutrino luminosities, at a density of $10^{11}\mathrm{\ g\ cm}^{-3}$, due to the heating of the electron neutrinospheres resulting from the scattering of higher-energy heavy-flavor neutrinos, emanating from deeper regions, on nucleons in the neutrinospheric region.}
\label{fig:MuJaMa12Fig17}
\end{figure}
In 2003, yet another source of heavy-flavor neutrino pair production was introduced.
\citet{BuJaKe03} examined the production of heavy-flavor neutrino pairs through the annihilation of electron neutrino--antineutrino pairs.
They found that heavy-flavor pair production by electron-flavor pair annihilation dominated the production of such pairs through electron--positron pair annihilation.
Moreover, they found that the inclusion of this mode of heavy-flavor production in their model boosted the heavy-flavor luminosities during the first $\sim 150$ ms after bounce and decreased the electron-flavor luminosities after $\sim 200$ ms.
They also found that their shock was weaker and reached a smaller peak radius when electron-flavor pair annihilation was included.
While the differences were not ``dramatic,'' they concluded they were also not ``negligible.''
And once again in the same year, progress was made on a different front.
The rates for electron capture on nuclei in the Bruenn 85 opacity set are based on the Independent Particle Model (IPM) for the nucleons in the nucleus.
That is, the IPM assumes that the nucleons are noninteracting.
Under this assumption, the final neutron states are filled for nuclei with $N>40$, which is true of the stellar core nuclei, and electron capture on nuclei is blocked, so that deleptonization relies solely on capture on (free) protons.
This assumption was finally removed as nuclear structure models developed.
In 2003, rates for electron capture on nuclei were recomputed using a ``hybrid'' model, wherein thermal excitation and nucleon--nucleon correlations are both accounted for \citep{LaMaSa03}.
Owing to the improved description, capture on nuclei was no longer blocked and was, in fact, found to dominate capture on protons during core collapse, resulting in a more neutronized/deleptonized core, a smaller inner core, and a deeper shock formation mass \citep{HiMeMe03}.
\begin{figure}[htbp]
\includegraphics[width=\textwidth]{bounce_prl.pdf}
\caption{From \citet{HiMeMe03}, plots of the density, entropy per baryon, electron fraction, and fluid velocity at core bounce using data from two models: one implementing the ``Bruenn 85'' electron capture rates \citep{Bruenn1985}, based on the Independent Particle Model of nucleons, and one implementing the rates of \citet{LaMaSa03}, based on the ``Hybrid'' model, which includes correlations between the nucleons in the random phase approximation (RPA) and finite-temperature effects. The former (latter) data correspond to the thin (thick) black lines in the plots. With the inclusion of the hybrid-model electron capture rates, electron capture is unblocked and proceeds, leading to increased electron capture in the core in this model and, consequently, to a significant inward shift in the mass location of the bounce shock.}
\label{fig:bounce_prl}
\end{figure}
The importance of the above additions and modifications to the neutrino opacities in core-collapse supernova models was reinforced in the context of later two-dimensional models developed by other groups \citep{BuVaDo18,JuBoJa18,KoTaFi18}.
Earlier in this section, we have seen the impact of adding new weak interaction channels and improving the treatment of those already included in core-collapse supernova models.
Here we explore yet another dimension of this important sector of core-collapse supernova physics: the interplay of neutrino weak interaction channels (new and/or modified).
\citet{LeMeMe12b} conducted an in-depth analysis focused largely on the neutrino production and interaction channels discussed above (i.e., nucleon--nucleon bremsstrahlung, non-isoenergetic scattering, and electron capture on nuclei).
They demonstrated several important points:
(1) While the addition of a single interaction channel may impact the dynamics of stellar core collapse and the post-bounce evolution, the addition of two interaction channels may not be additive---in fact, it may render one of the additional channels irrelevant.
(2) When two or more interaction channels are included and are instead additive, the additive impact may be nonlinear.
As an example, Lentz et~al.\ considered the interplay of electron capture on nuclei and neutrino--electron scattering during stellar core collapse.
If we consider the nucleons as independent particles (i.e., within the IPM), electron capture on nuclei is blocked for $N>40$, where $N$ is the neutron number.
In this case, the nuclear electron capture rates given by \citet{Bruenn1985} are appropriate.
In this instance, neutrino--electron scattering, which scatters neutrinos to lower energies given the core's electron degeneracy, leads to a significant increase in core deleptonization and a concomitant decrease in the inner core mass.
On the other hand, if the improved nuclear electron capture rates of Langanke et~al.\ are used, which factor in nucleon interactions and correlations, nuclear electron capture is no longer blocked.
In turn, low-energy neutrino states are filled, and neutrino--electron scattering is no longer able to downscatter neutrinos in energy (and contributes very little to the total neutrino opacity) and becomes rather unimportant.
This is captured in Fig.~\ref{fig:fig1lentz12}.
Comparing, for example, the velocity at bounce in the upper left panel of Fig.~\ref{fig:fig1lentz12} for the case ``Base,'' which includes the full set of neutrino weak interactions, with the case ``Base-NoNES,'' which leaves out neutrino--electron scattering, it is obvious that there is no difference.
This is also true of all of the other quantities plotted.
On the other hand, comparing ``IPA,'' which includes nuclear electron capture in the independent particle approximation, with ``IPA-NoNES,'' it is evident that neutrino--electron scattering had a significant impact during collapse and on the final shock formation location.
\begin{figure}[htb]
\includegraphics[width=\textwidth]{fig1lentz12.pdf}
\caption{Plots of velocity, density, entropy, temperature, electron and lepton fraction, and pressure at core bounce across five models with different input physics \citep{LeMeMe12b}. The model ``Base'' includes all weak interactions and uses the modern, hybrid-model electron capture rates. Model ``Base-NoNES'' includes the same weak interaction physics, with one exception: neutrino--electron scattering (NES) is not included. Similarly, model ``IPA'' includes all weak interaction channels, as does model Base, but uses the Independent Particle Approximation (IPA) rates for nuclear electron capture. Model ``IPA-NoNES'' includes the same weak interaction physics as IPA, except neutrino--electron scattering. Comparing models Base and Base-NoNES, no significant changes result when NES is excluded. On the other hand, comparing models IPA and IPA-NoNES, we reach a different conclusion: In this case, the inclusion of NES has a significant impact on core deleptonization and, consequently, on the mass of the inner core at bounce. These comparisons demonstrate there is an interplay between different neutrino opacities. An improvement in one opacity may render an otherwise important second opacity relatively unimportant.}
\label{fig:fig1lentz12}
\end{figure}
That the search for all neutrino weak interactions relevant to core-collapse supernovae is an ongoing activity is no better illustrated than by the recent example provided by \citet{BoJaLo17}, whose work illuminated the importance of including muons and neutrino--muon weak interactions in core-collapse supernova models.
Past models assumed that the population of muons in the stellar core during collapse, bounce, and the post-bounce neutrino shock reheating epoch would remain low given the large rest mass of the muon.
Bollig et~al.\ point out that such arguments are not well motivated.
The electron chemical potential in the proto-neutron star at this time exceeds the muon rest mass, and the core temperature is large, as well.
In the context of two-dimensional supernova models using the \textsc{Vertex} code and initiated from a $20\,M_\odot$ progenitor, they demonstrated that significant populations of muons are in fact produced and, more importantly, that the inclusion of muons in their supernova models impacted the outcomes quantitatively in all cases and even qualitatively in some cases, depending on the nuclear equation of state used.
For the SFHo equation of state, models with muons exhibited explosion whereas counterpart models without them did not.
For models with the LS220 equation of state, those with muons exploded earlier, indicating that explosion was facilitated in these models as well.
Bollig et~al.'s results are encapsulated in Fig.~\ref{fig:shockwithmuons}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\textwidth]{shockwithmuons.pdf}
\caption{In the upper left panel, from \citet{BoJaLo17}, the angle-averaged shock trajectories for several models, excluding and including muons, using the Steiner--Fischer--Hempel (SFHo) equation of state, are plotted. In the upper right panel, plotted are the results from the models that instead use the Lattimer--Swesty equation of state with bulk compression modulus $K=220$ MeV (LS220). Here ``Standard'' indicates models without muons.}
\label{fig:shockwithmuons}
\end{figure}
We close this section with an emphasis on one final important point: Like all weak interaction cross sections, those the community has found to be important to core-collapse supernova evolution, and has included in its supernova models, have uncertainties associated with them.
These uncertainties can arise from experimental uncertainties, in the few cases where the cross sections have been measured directly, or from uncertainties in the theory used to predict them.
In the end, the supernova modeling community must rely on such theoretical predictions, given that it is impossible to measure all relevant cross sections under all relevant thermodynamic conditions and at all relevant neutrino energies found in a supernova environment.
Thus, it is important to explore the potential impact of such uncertainties on the quantitative and qualitative core-collapse supernova model outcomes.
\begin{figure}[htb]
\includegraphics[width=\textwidth]{strangeness.pdf}
\caption{Results are shown here from the core-collapse supernova studies of \citet{MeJaBo15}. In particular, in the uppermost left panel, the angle-averaged shock radius is plotted for two pairs of models, one for the two-dimensional case and one for the three-dimensional case. All cases are launched from a $20\,M_\odot$ progenitor and were performed with the \textsc{Vertex} code. Within each case, two simulations were performed, one using the standard weak interaction cross section for neutrino--nucleon scattering and the other including a correction for the strangeness content of the nucleon, which results in a correction to the coupling constants. In two dimensions, both models explode, with some quantitative differences observed in the shock trajectories. In the more important three-dimensional case, the outcomes with and without the correction are \emph{qualitatively} different. Specifically, explosion is not obtained in their model unless the opacity correction is included.}
\label{fig:strangeness}
\end{figure}
Case in point: the exploration of the impact of the uncertainty in the neutrino--nucleon cross section.
\citet{MeJaBo15} performed two state-of-the-art three-dimensional simulations of the core-collapse supernova explosion of a $20\,M_\odot$ progenitor.
In one case, they included what the modeling community regarded at the time as the state-of-the-art neutrino weak interaction set, with no modification to any of the cross sections.
In the other, they varied one of the cross sections, albeit a critical one: the cross section for neutrino scattering on nucleons.
This cross section is one of the most important for neutrino transport below the neutrinospheres, as the leading opacity source and, as we saw above, as an additional heating source for matter within the proto-neutron star.
Uncertainty in the cross section for neutrino--nucleon scattering arises from, among other things, uncertainty in the strangeness content of the nucleon, which can alter the coupling constants.
In particular, Melson et~al.\ varied the cross section by $\sim$10\%, consistent with the experimental uncertainties, and in so doing found they could \emph{qualitatively} alter the outcome of the model.
When the standard weak interaction set was used, they did not obtain an explosion in the model.
When they varied the neutrino--nucleon cross section, they did.
The results are shown in Fig.~\ref{fig:strangeness}.
Of course, we have already seen that variations in a particular cross section can interact with variations in another.
The only way the supernova modeling community can accurately assess the impact of variations in a single cross section is to vary all of them, in a statistically meaningful way---i.e., perform a sensitivity study.
And, obviously, this should be performed, at least ultimately, in the context of three-dimensional models.
Unfortunately, the last requirement cannot be met at this time.
Such a study would require that many three-dimensional models be performed, which at the moment, even with the significant computing power afforded the modeling community by today's supercomputers, is prohibitive.
Such studies should be conducted, but they will have to wait for future supercomputing capabilities.
\begin{landscape}
\begin{table}
\caption{Relevant modern neutrino emissivities and opacities, most or all of which have been adopted in three-dimensional core-collapse supernova models.}
\label{tab:neutrinoopacities}
\begin{tabular}{|c|c|c|}
\hline
Category & Weak Interaction & Opacity Source \\
\hline
Absorption and Emission & $e^{-}+p\rightleftarrows n+\nu_e$ & \citet{BuSa98}, \citet{RePrLa98}, \citet{Horowitz2002} \\
 & $e^{+}+n\rightleftarrows p+\bar{\nu}_e$ & \citet{BuSa98}, \citet{RePrLa98}, \citet{Horowitz2002} \\
 & $e^{-}+A(Z,N)\rightleftarrows A(Z-1,N+1)+\nu_e$ & \citet{LaMaSa03} \\
\hline
Coherent Isoenergetic Scattering & $\nu_{e,\mu,\tau},\bar{\nu}_{e,\mu,\tau} + A \rightarrow \nu_{e,\mu,\tau},\bar{\nu}_{e,\mu,\tau} + A$ & \citet{BrMe97}, \citet{Horowitz1997} \\
\hline
Non-Isoenergetic Scattering & $\nu_{e,\mu,\tau},\bar{\nu}_{e,\mu,\tau} + e^{-,+} \rightarrow \nu_{e,\mu,\tau},\bar{\nu}_{e,\mu,\tau} + e^{+,-}$ & \citet{MeBr93c} \\
 & $\nu_{e,\mu,\tau},\bar{\nu}_{e,\mu,\tau} + n,p \rightarrow \nu_{e,\mu,\tau},\bar{\nu}_{e,\mu,\tau} + n,p$ & \citet{BuSa98}, \citet{RePrLa98}, \citet{Horowitz2002} \\
\hline
Pair Creation and Annihilation & $e^{+}+e^{-} \rightleftarrows \nu_{e,\mu,\tau} + \bar{\nu}_{e,\mu,\tau}$ & \citet{Bruenn1985} \\
 & $N+N \rightleftarrows N+N+\nu_{e,\mu,\tau} + \bar{\nu}_{e,\mu,\tau}$ & \citet{HaRa98} \\
 & $\nu_{e} + \bar{\nu}_{e}\rightleftarrows \nu_{\mu,\tau} + \bar{\nu}_{\mu,\tau}$ & \citet{BuJaKe03} \\
\hline
\end{tabular}
\end{table}
\end{landscape}
\subsection{The relevant neutrino interactions}
\label{sec:neutrinoInteractions}
The previous section makes clear that the effort to ascertain which neutrino weak interactions are important to core-collapse supernova theory is an ongoing activity.
To date, the list included in Table~\ref{tab:neutrinoopacities} is deemed to be the essential set.
Most, if not all, of the weak interactions in the list have been included in the state-of-the-art simulations whose underlying numerical methods have been the focus of this review.
Motivated by the recent example documented in the previous section, in Table~\ref{tab:muonopacities} we also include a list of the relevant neutrino weak interactions involving muons.
At present, these have been included by only one group \citep{BoJaLo17} and, as discussed, have been found to be important by this group.
In light of this, adoption of these weak interactions by other groups is certainly warranted.
\subsubsection{Boltzmann collision term}
\begin{table}
\caption{Relevant neutrino--muon weak interactions.}
\begin{center}
\begin{tabular}{|c|}
\hline
$\mu^{-} + p \rightleftarrows n + \nu_{\mu}$ \\
$\mu^{-} + \nu_{e} \rightleftarrows e^{-} + \nu_{\mu}$ \\
$\mu^{-} + \bar{\nu}_{\mu} \rightleftarrows e^{-} + \bar{\nu}_{e}$ \\
$\mu^{-} \rightleftarrows e^{-} + \bar{\nu}_{e} + \nu_{\mu}$ \\
\hline
$\mu^{+} + n \rightleftarrows p + \bar{\nu}_{\mu}$ \\
$\mu^{+} + \bar{\nu}_{e} \rightleftarrows e^{+} + \bar{\nu}_{\mu}$ \\
$\mu^{+} + \nu_{\mu} \rightleftarrows e^{+} + \nu_{e}$ \\
$\mu^{+} \rightleftarrows e^{+} + \nu_{e} + \bar{\nu}_{\mu}$ \\
\hline
$\nu_{e,\mu,\tau} + \mu^{+,-} \rightleftarrows \nu_{e,\mu,\tau} + \mu^{+,-}$ \\
\hline
\end{tabular}
\end{center}
\label{tab:muonopacities}
\end{table}
We write the collision term as the sum of terms corresponding to the main processes---emission and absorption, scattering, and pair creation and annihilation---listed in Table~\ref{tab:neutrinoopacities}:
\begin{equation}
\mathcal{C}[f_{s}](p) = \mathcal{C}_{\mbox{\tiny{\sc AbEm}}}[f_{s}](p) + \mathcal{C}_{\mbox{\tiny{\sc Scat}}}[f_{s}](p) + \mathcal{C}_{\mbox{\tiny{\sc Pair}}}[f_{s}](p).
\label{eq:collisionTermBoltzmannSum}
\end{equation}
For each of the terms, we focus on its functional form, which is closely related to the computational complexity of including a particular weak interaction in a core-collapse supernova model.
Each term---hence, each added interaction---warrants tailored consideration.
The term due to neutrino emission and absorption is written as
\begin{equation}
\f{1}{\varepsilon}\,\mathcal{C}_{\mbox{\tiny{\sc AbEm}}}[f_{s}](p) = [1-f_{s}(p)]\eta_{s} - \chi_{s}\,f_{s}(p),
\label{eq:collisionTermAmEm}
\end{equation}
where $\eta_{s}$ and $\chi_{s}$ are the emissivity and absorption opacity of neutrino species $s$ and are assumed to be isotropic in the momentum-space angle (independent of $\omega$), but depend on the neutrino energy $\varepsilon$.
The blocking factor, $1-f_{s}(p)$, is included to account for the Fermi--Dirac statistics of neutrinos, and suppresses neutrino emission when the phase-space occupancy is high (i.e., when $f_{s}\lesssim1$).
It is common to introduce $\tilde{\chi}_{s}=(\eta_{s}+\chi_{s})$, associated in this case with ``stimulated absorption'' (as opposed to the stimulated emission of photons), and to define $f_{0,s}=\eta_{s}/\tilde{\chi}_{s}$, in which case Eq.~\eqref{eq:collisionTermAmEm} can be written in relaxation form:
\begin{equation}
\f{1}{\varepsilon}\,\mathcal{C}_{\mbox{\tiny{\sc AbEm}}}[f_{s}](p) = \tilde{\chi}_{s}\,\big(\,f_{0,s}-f_{s}\,\big).
\label{eq:collisionTermAmEmRelaxation}
\end{equation}
In this form it is easy to see that the collision term drives the distribution function toward the equilibrium distribution, $f_{0,s}$.
Note also that this interaction is local in momentum space; i.e., there is no coupling across momentum-space bins.
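Because the operator is local and becomes stiff where $\tilde{\chi}_{s}$ is large, emission and absorption are commonly treated implicitly in time. The following minimal sketch (ours; it integrates only the relaxation form in Eq.~\eqref{eq:collisionTermAmEmRelaxation}, decoupled from the transport terms and with $c=1$) illustrates why the implicit update is inexpensive here:
\begin{verbatim}
import numpy as np

def abem_backward_euler(f, f0, chi_tilde, dt):
    # One backward-Euler step of df/dt = chi_tilde * (f0 - f).
    # All arguments may be numpy arrays over the energy bins: the update
    # is algebraic, bin by bin, because the operator is local in
    # momentum space; for dt * chi_tilde >> 1 (stiff, opaque regions)
    # f relaxes stably toward the equilibrium distribution f0.
    return (f + dt * chi_tilde * f0) / (1.0 + dt * chi_tilde)
\end{verbatim}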
Neutrino--matter scattering (the second and third categories in Table~\ref{tab:neutrinoopacities}) is described by
\begin{align}
\f{1}{\varepsilon}\,\mathcal{C}_{\mbox{\tiny{\sc Scat}}}[f_{s}](p)
&=\big(1-f_{s}(p)\big)\int_{V_{p}}\mathcal{R}_{\mbox{\tiny{\sc Scat}}}^{\mbox{\tiny{\sc In}}}(p,p')\,f_{s}(p')\,d^{3}p' \nonumber \\
&\hspace{12pt}
-f_{s}(p)\int_{V_{p}}\mathcal{R}_{\mbox{\tiny{\sc Scat}}}^{\mbox{\tiny{\sc Out}}}(p,p')\,(1-f_{s}(p'))\,d^{3}p',
\label{eq:collisionTermScat}
\end{align}
where $\mathcal{R}_{\mbox{\tiny{\sc Scat}}}^{\mbox{\tiny{\sc In}}}(p,p')$ is the scattering rate from momentum $p'$ into $p$, and $\mathcal{R}_{\mbox{\tiny{\sc Scat}}}^{\mbox{\tiny{\sc Out}}}(p,p')$ is the scattering rate out of momentum $p$ into $p'$.
When compared with the collision term in Eq.~\eqref{eq:collisionTermAmEmRelaxation}, the coupling in momentum space (due to the integral operators) increases the computational complexity of evaluating the collision operator.
If momentum space is discretized into $N_{p}$ bins, a brute force evaluation of Eq.~\eqref{eq:collisionTermScat} for all $p$ requires $\mathcal{O}(N_{p}^{2})$ operations.
Note also the blocking factors in Eq.~\eqref{eq:collisionTermScat}, which suppress scattering into high-occupancy regions of momentum space.
The second category in Table~\ref{tab:neutrinoopacities} (coherent, isoenergetic scattering) is obtained as a simplification of Eq.~\eqref{eq:collisionTermScat} by letting
\begin{equation}
\mathcal{R}_{\mbox{\tiny{\sc Scat}}}^{\mbox{\tiny{\sc In}}/\mbox{\tiny{\sc Out}}}(p,p') \to \mathcal{R}_{\mbox{\tiny{\sc Iso}}}(|p|,\cos\alpha)\,\delta(|p|-|p'|),
\end{equation}
where $\cos\alpha=p \cdot p'/(|p||p'|)$.
For this type of interaction, with $d^{3}p'=|p'|^{2}\,d|p'|\,d\omega'$, the collision term is given by
\begin{align}
\f{1}{\varepsilon}\,\mathcal{C}_{\mbox{\tiny{\sc Iso}}}[f_{s}](|p|,\omega)
&=\int_{\mathbb{S}^{2}}\mathcal{R}_{\mbox{\tiny{\sc Iso}}}(|p|,\cos\alpha)\,|p|^{2}\,f_{s}(|p|,\omega')\,d\omega' \nonumber \\
&\hspace{12pt}
-f_{s}(|p|,\omega)\int_{\mathbb{S}^{2}}\mathcal{R}_{\mbox{\tiny{\sc Iso}}}(|p|,\cos\alpha)\,|p|^{2}\,d\omega',
\label{eq:collisionTermScatIso}
\end{align}
which is considerably simplified relative to the scattering operator in Eq.~\eqref{eq:collisionTermScat}.
Finally, neutrino--antineutrino pair creation and annihilation (e.g., from electron--positron pairs; the fourth category in Table~\ref{tab:neutrinoopacities}) is described by
\begin{align}
\f{1}{\varepsilon}\,\mathcal{C}_{\mbox{\tiny{\sc Pair}}}[f_{s}](p)
&=(1-f_{s}(p))\int_{V_{p}}\mathcal{R}_{\mbox{\tiny{\sc Pair}}}^{\mbox{\tiny{\sc In}}}(p,p')\,(1-\bar{f}_{s}(p'))\,d^{3}p' \nonumber \\
& \hspace{12pt}
-f_{s}(p)\int_{V_{p}}\mathcal{R}_{\mbox{\tiny{\sc Pair}}}^{\mbox{\tiny{\sc Out}}}(p,p')\,\bar{f}_{s}(p')\,d^{3}p',
\label{eq:collisionTermPair}
\end{align}
where $\mathcal{R}_{\mbox{\tiny{\sc Pair}}}^{\mbox{\tiny{\sc In}}}(p,p')$ and $\mathcal{R}_{\mbox{\tiny{\sc Pair}}}^{\mbox{\tiny{\sc Out}}}(p,p')$ are the neutrino--antineutrino pair production and annihilation rates, respectively, and $\bar{f}_{s}$ is the antineutrino distribution function.
We note that the functional form of the collision term for the last of the pair processes included in Table~\ref{tab:neutrinoopacities} is not represented by the functional form for pair creation and annihilation presented here.
We note that the functional form of the collision term for the last of the pair processes included in Table \ref{tab:neutrinoopacities} is not represented by the functional form for pair creation and annihilation presented here. In this particular case, both in-states and both out-states correspond to neutrinos, which, when treated without approximation, results in a collision term involving four distribution functions. This non-approximate treatment of the process has yet to be implemented in core-collapse supernova models. As a result, we do not include its functional form here. All of the above rates $\eta_{s}$, $\chi_{s}$, $\mathcal{R}_{\mbox{\tiny{\sc Scat}}}^{\mbox{\tiny{\sc In}}/\mbox{\tiny{\sc Out}}}$, and $\mathcal{R}_{\mbox{\tiny{\sc Pair}}}^{\mbox{\tiny{\sc In}}/\mbox{\tiny{\sc Out}}}$ depend on the thermodynamic state of the stellar core fluid (e.g., $\rho$, $T$, and $Y_{e}$). Symmetries exist in some of the collision kernels \citep[e.g.,][]{Bruenn1985,Ce94}, and these should be leveraged in computations. First, because the total number of neutrinos is conserved in neutrino--matter scattering, \begin{equation} \int_{V_{p}}\mathcal{C}_{\mbox{\tiny{\sc Scat}}}[f_{s}](p)\,\f{d^{3}p}{\varepsilon}=0, \end{equation} and the following in--out invariance holds: \begin{equation} \mathcal{R}_{\mbox{\tiny{\sc Scat}}}^{\mbox{\tiny{\sc In}}}(p,p') = \mathcal{R}_{\mbox{\tiny{\sc Scat}}}^{\mbox{\tiny{\sc Out}}}(p',p). \label{eq:inOutInvarianceScattering} \end{equation} Second, when the neutrino distribution function equals the local Fermi--Dirac distribution, $f_{s}=f_{0,s}=1/[e^{(\varepsilon-\mu_{\nu,s})/T}+1]$, where $T$ is the matter temperature and $\mu_{\nu,s}$ is the equilibrium chemical potential of neutrino species $s$, the net energy and momentum transfer between neutrinos and matter due to scattering must vanish. Thus, requiring \begin{equation} \int_{V_{p}}\mathcal{C}_{\mbox{\tiny{\sc Scat}}}[f_{0,s}](p)\,g(p)\,\f{d^{3}p}{\varepsilon}=0 \end{equation} for an arbitrary function $g(p)$ gives \begin{equation} \mathcal{R}_{\mbox{\tiny{\sc Scat}}}^{\mbox{\tiny{\sc In}}}(p,p') = \mathcal{R}_{\mbox{\tiny{\sc Scat}}}^{\mbox{\tiny{\sc Out}}}(p,p')\,e^{-(\varepsilon-\varepsilon')/T}=\mathcal{R}_{\mbox{\tiny{\sc Scat}}}^{\mbox{\tiny{\sc In}}}(p',p)\,e^{-(\varepsilon-\varepsilon')/T}, \end{equation} where Eq.~\eqref{eq:inOutInvarianceScattering} is used in the rightmost expression. \subsubsection{Two-moment collision terms} \label{sec:collisionTermsTwoMoment} Collision terms for the two-moment model are derived by taking angular moments of the collision term in Eq.~\eqref{eq:collisionTermBoltzmannSum}. Such terms have been discussed in the context of multidimensional two-moment models by, e.g., \citet{ShKiSe11}. For completeness, we list two-moment collision terms corresponding to angular moments of Eqs.~\eqref{eq:collisionTermAmEmRelaxation}, \eqref{eq:collisionTermScat}, and \eqref{eq:collisionTermPair} here. Considering the two-moment models delineated in Sect.~\ref{sec:TwoMoment}, the relevant angular moments of the Boltzmann collision term are \begin{equation} \f{1}{4\pi}\int_{\mathbb{S}^{2}}\mathcal{C}[f_{s}]\,\f{d\omega}{\varepsilon} \quad\text{and}\quad \f{1}{4\pi}\int_{\mathbb{S}^{2}}\mathcal{C}[f_{s}]\,\ell_{j}\,\f{d\omega}{\varepsilon}. \end{equation} (The first of these terms also appears in the one-moment model discussed in Sect.~\ref{sec:oneMomentKinetics}; cf.\ Eq.~\eqref{eq:spectralLagrangianEnergyEquationFLD_3p1}.) \paragraph{Emission/absorption} For emission and absorption, the evaluation is straightforward since the emissivity and opacity are isotropic in momentum-space angle.
The zeroth moment gives \begin{align} \f{1}{4\pi}\int_{\mathbb{S}^{2}}\mathcal{C}_{\mbox{\tiny{\sc AbEm}}}[f_{s}]\,\f{d\omega}{\varepsilon} =\big(1-\mathcal{D}_{s}\big)\,\eta_{s} - \chi_{s}\,\mathcal{D}_{s} =\tilde{\chi}_{s}\,\big(\,\mathcal{D}_{0,s}-\mathcal{D}_{s}\,\big), \end{align} where the zeroth moment of the equilibrium distribution is defined as \begin{equation} \mathcal{D}_{0,s}=\f{1}{4\pi}\int_{\mathbb{S}^{2}}f_{0,s}\,d\omega. \end{equation} The first moment gives \begin{equation} \f{1}{4\pi}\int_{\mathbb{S}^{2}}\mathcal{C}_{\mbox{\tiny{\sc AbEm}}}[f_{s}]\,\ell_{j}\,\f{d\omega}{\varepsilon} =-\tilde{\chi}_{s}\,\mathcal{I}_{s,j}, \end{equation} since the angular average of $\ell_{j}$ vanishes. \paragraph{Angular kernel approximations} To incorporate scattering and pair processes in the two-moment model, following \citet{Bruenn1985}, the kernels are expanded in a Legendre series up to linear order; e.g., \begin{equation} \mathcal{R}_{\mbox{\tiny{\sc Scat}}}^{\mbox{\tiny{\sc In}}}(p,p') = \mathcal{R}_{\mbox{\tiny{\sc Scat}}}^{\mbox{\tiny{\sc In}}}(\varepsilon,\varepsilon',\Omega) \approx \Phi_{\mbox{\tiny{\sc Scat}},0}^{\mbox{\tiny{\sc In}}}(\varepsilon,\varepsilon') + \Phi_{\mbox{\tiny{\sc Scat}},1}^{\mbox{\tiny{\sc In}}}(\varepsilon,\varepsilon')\,\Omega(\omega,\omega'), \label{eq:kernelExpansion} \end{equation} where $\Omega=\ell_{\mu}(\omega)\ell^{\mu}(\omega')$ is the cosine of the scattering angle. From the orthogonality of the Legendre polynomials, the scattering coefficients are then evaluated from the kernels as \begin{equation} \big\{\,\Phi_{\mbox{\tiny{\sc Scat}},0}^{\mbox{\tiny{\sc In}}}(\varepsilon,\varepsilon'),\Phi_{\mbox{\tiny{\sc Scat}},1}^{\mbox{\tiny{\sc In}}}(\varepsilon,\varepsilon')\,\big\} =\f{1}{2}\int_{-1}^{1}\mathcal{R}_{\mbox{\tiny{\sc Scat}}}^{\mbox{\tiny{\sc In}}}(\varepsilon,\varepsilon',\Omega)\,\big\{\,1,\,3\,\Omega\,\big\}\,d\Omega. \end{equation} Terms beyond linear can be included in the expansion of the kernel in Eq.~\eqref{eq:kernelExpansion} at the expense of a more complicated collision operator for the two-moment model. \citet{SmCe96} investigated the effect of including the quadratic term for neutrino-electron scattering in a configuration during the infall phase of stellar core collapse. They found that including the quadratic term results in a better fit to the scattering kernel, but when comparing stationary-state transport solutions with and without the quadratic term, they found no significant difference in relevant quantities such as the neutrino number density, flux, and transfer rates of lepton number, energy, or momentum to the stellar fluid. We also note that \citet{JuBoJa18}, in their Appendix~A, provide expressions for pair processes that include the quadratic term in the Legendre expansion of the kernels.
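For a tabulated kernel, the projections above are typically evaluated by Gauss--Legendre quadrature in the scattering-angle cosine. A minimal sketch (the kernel callable and the quadrature order are illustrative assumptions):
\begin{verbatim}
import numpy as np

def legendre_moments(R_of_Omega, nquad=16):
    """First two Legendre coefficients of an angular kernel.

    R_of_Omega : callable returning the kernel at scattering-angle
                 cosine Omega (energies eps, eps' held fixed)
    Returns (Phi0, Phi1), the l = 0, 1 projections.
    """
    x, w = np.polynomial.legendre.leggauss(nquad)  # nodes/weights on [-1, 1]
    R = R_of_Omega(x)
    Phi0 = 0.5 * np.sum(w * R)             # (1/2) int R dOmega
    Phi1 = 0.5 * np.sum(w * 3.0 * x * R)   # (1/2) int 3 Omega R dOmega
    return Phi0, Phi1
\end{verbatim}
Quadratic and higher coefficients follow the same pattern, with weights $(2l+1)/2$ and $P_{l}(\Omega)$ in the integrand.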
\paragraph{Scattering} Employing the expansion in Eq.~\eqref{eq:kernelExpansion} for the scattering operator gives (for brevity, we suppress the species index $s$ on the moments) \begin{align} &\f{1}{4\pi}\int_{\mathbb{S}^{2}}\mathcal{C}_{\mbox{\tiny{\sc Scat}}}[f_{s}](p)\,\f{d\omega}{\varepsilon} \\ &=\big(1-\mathcal{D}(\varepsilon)\big)\int_{0}^{\infty}\Phi_{\mbox{\tiny{\sc Scat}},0}^{\mbox{\tiny{\sc In}}}(\varepsilon,\varepsilon')\,\mathcal{D}(\varepsilon')\,dV_{\varepsilon'} -\mathcal{I}_{\mu}(\varepsilon)\int_{0}^{\infty}\Phi_{\mbox{\tiny{\sc Scat}},1}^{\mbox{\tiny{\sc In}}}(\varepsilon,\varepsilon')\,\mathcal{I}^{\mu}(\varepsilon')\,dV_{\varepsilon'} \nonumber \\ &\hspace{6pt} -\mathcal{D}(\varepsilon)\int_{0}^{\infty}\Phi_{\mbox{\tiny{\sc Scat}},0}^{\mbox{\tiny{\sc Out}}}(\varepsilon,\varepsilon')\,\big(1-\mathcal{D}(\varepsilon')\big)\,dV_{\varepsilon'} +\mathcal{I}_{\mu}(\varepsilon)\int_{0}^{\infty}\Phi_{\mbox{\tiny{\sc Scat}},1}^{\mbox{\tiny{\sc Out}}}(\varepsilon,\varepsilon')\,\mathcal{I}^{\mu}(\varepsilon')\,dV_{\varepsilon'} \nonumber \end{align} for the zeroth moment (recall that $dV_{\varepsilon}=4\pi\varepsilon^{2}d\varepsilon$), and \begin{align} &\f{1}{4\pi}\int_{\mathbb{S}^{2}}\mathcal{C}_{\mbox{\tiny{\sc Scat}}}[f_{s}]\,\ell_{j}\,\f{d\omega}{\varepsilon} \\ &=-\mathcal{I}_{j}(\varepsilon)\int_{0}^{\infty}\Phi_{\mbox{\tiny{\sc Scat}},0}^{\mbox{\tiny{\sc In}}}(\varepsilon,\varepsilon')\,\mathcal{D}(\varepsilon')\,dV_{\varepsilon'} +\big(\f{1}{3}\,g_{j\mu}-\widehat{\mathcal{K}}_{j\mu}(\varepsilon)\big)\int_{0}^{\infty}\Phi_{\mbox{\tiny{\sc Scat}},1}^{\mbox{\tiny{\sc In}}}(\varepsilon,\varepsilon')\,\mathcal{I}^{\mu}(\varepsilon')\,dV_{\varepsilon'} \nonumber \\ &\hspace{12pt} -\mathcal{I}_{j}(\varepsilon)\int_{0}^{\infty}\Phi_{\mbox{\tiny{\sc Scat}},0}^{\mbox{\tiny{\sc Out}}}(\varepsilon,\varepsilon')\,\big(1-\mathcal{D}(\varepsilon')\big)\,dV_{\varepsilon'} +\widehat{\mathcal{K}}_{j\mu}\int_{0}^{\infty}\Phi_{\mbox{\tiny{\sc Scat}},1}^{\mbox{\tiny{\sc Out}}}(\varepsilon,\varepsilon')\,\mathcal{I}^{\mu}(\varepsilon')\,dV_{\varepsilon'} \nonumber \end{align} for the first moment. Here we have used \begin{equation} \f{1}{4\pi}\int_{\mathbb{S}^{2}}\ell_{\mu}(\omega)\,\ell_{\nu}(\omega)\,d\omega=\f{1}{3}\,g_{\mu\nu}.
\end{equation} \paragraph{Pair processes} Employing the kernel expansion in Eq.~\eqref{eq:kernelExpansion} for the neutrino-antineutrino pair creation and annihilation operator in Eq.~\eqref{eq:collisionTermPair} gives \begin{align} &\f{1}{4\pi}\int_{\mathbb{S}^{2}}\mathcal{C}_{\mbox{\tiny{\sc Pair}}}[f_{s}](p)\,\f{d\omega}{\varepsilon} \\ &=\big(1-\mathcal{D}(\varepsilon)\big)\int_{0}^{\infty}\Phi_{\mbox{\tiny{\sc Pair}},0}^{\mbox{\tiny{\sc In}}}(\varepsilon,\varepsilon')\,\big(1-\bar{\mathcal{D}}(\varepsilon')\big)\,dV_{\varepsilon'} +\mathcal{I}_{\mu}(\varepsilon)\int_{0}^{\infty}\Phi_{\mbox{\tiny{\sc Pair}},1}^{\mbox{\tiny{\sc In}}}(\varepsilon,\varepsilon')\,\bar{\mathcal{I}}^{\mu}(\varepsilon')\,dV_{\varepsilon'} \nonumber \\ &\hspace{12pt} -\mathcal{D}(\varepsilon)\int_{0}^{\infty}\Phi_{\mbox{\tiny{\sc Pair}},0}^{\mbox{\tiny{\sc Out}}}(\varepsilon,\varepsilon')\,\bar{\mathcal{D}}(\varepsilon')\,dV_{\varepsilon'} -\mathcal{I}_{\mu}(\varepsilon)\int_{0}^{\infty}\Phi_{\mbox{\tiny{\sc Pair}},1}^{\mbox{\tiny{\sc Out}}}(\varepsilon,\varepsilon')\,\bar{\mathcal{I}}^{\mu}(\varepsilon')\,dV_{\varepsilon'} \nonumber \end{align} for the zeroth moment of the collision operator, and \begin{align} &\f{1}{4\pi}\int_{\mathbb{S}^{2}}\mathcal{C}_{\mbox{\tiny{\sc Pair}}}[f_{s}](p)\,\ell_{j}\,\f{d\omega}{\varepsilon} \\ &=-\mathcal{I}_{j}(\varepsilon)\int_{0}^{\infty}\Phi_{\mbox{\tiny{\sc Pair}},0}^{\mbox{\tiny{\sc In}}}(\varepsilon,\varepsilon')\,\big(1-\bar{\mathcal{D}}(\varepsilon')\big)\,dV_{\varepsilon'} -\big(\f{1}{3}\,g_{j\mu}-\widehat{\mathcal{K}}_{j\mu}(\varepsilon)\big)\int_{0}^{\infty}\Phi_{\mbox{\tiny{\sc Pair}},1}^{\mbox{\tiny{\sc In}}}(\varepsilon,\varepsilon')\,\bar{\mathcal{I}}^{\mu}(\varepsilon')\,dV_{\varepsilon'} \nonumber \\ &\hspace{12pt} -\mathcal{I}_{j}(\varepsilon)\int_{0}^{\infty}\Phi_{\mbox{\tiny{\sc Pair}},0}^{\mbox{\tiny{\sc Out}}}(\varepsilon,\varepsilon')\,\bar{\mathcal{D}}(\varepsilon')\,dV_{\varepsilon'} -\widehat{\mathcal{K}}_{j\mu}(\varepsilon)\int_{0}^{\infty}\Phi_{\mbox{\tiny{\sc Pair}},1}^{\mbox{\tiny{\sc Out}}}(\varepsilon,\varepsilon')\,\bar{\mathcal{I}}^{\mu}(\varepsilon')\,dV_{\varepsilon'} \nonumber \end{align} for the first moment. Here, $\bar{\mathcal{D}}$ and $\bar{\mathcal{I}}^{\mu}$ are the zeroth and first moments of the antineutrino distribution function $\bar{f}$. \subsection{Neutrino--matter coupling} In coupling neutrinos and matter, we are primarily concerned with lepton and four-momentum exchange. The neutrino lepton current density is \begin{equation} J_{\mbox{\tiny neutrino}}^{\nu} = \sum_{s=\nu_{e},\bar{\nu}_{e}}\mathsf{g}_{s}\,N_{s}^{\nu}, \label{eq:neutrinoLeptonCurrentDensity} \end{equation} where $N_{s}^{\nu}$ is the neutrino four-current density for neutrino species $s$, defined as in Eq.~\eqref{eq:numberMoments} with distribution function $f_{s}$, and $\mathsf{g}_{s}$ is the lepton number of neutrino species $s$ ($\mathsf{g}_{s}=+1$ for neutrinos, and $\mathsf{g}_{s}=-1$ for antineutrinos). 
From the electron number conservation equation, Eq.~\eqref{eq:ElectronNumberConservation}, and the neutrino number conservation equation, Eq.~\eqref{eq:numberEquation} (one for each neutrino species), we obtain \begin{equation} \nabla_{\nu}\big(\,J_{\mbox{\tiny neutrino}}^{\nu} + J_{e}^{\nu}/m_{\mbox{\tiny B}}\,\big) = \sum_{s=\nu_{e},\bar{\nu}_{e}}\mathsf{g}_{s}\,\int_{V_{p}}\mathcal{C}[f_{s}]\,\pi_{m} - L. \label{eq:totalLeptonConservation} \end{equation} Lepton number conservation demands that the source term on the right-hand side of Eq.~\eqref{eq:ElectronNumberConservation} take the form \begin{equation} \label{eq:electronfractionequationsourceterm} L = \sum_{s=\nu_{e},\bar{\nu}_{e}}\mathsf{g}_{s}\int_{V_{p}}\mathcal{C}[f_{s}]\,\pi_{m}. \end{equation} Note that, for simplicity of this exposition, we have assumed that only electron neutrinos and antineutrinos are involved in lepton exchange with the fluid, but see \citet{BoJaLo17} for a discussion of additional lepton exchange channels when muons are included as a fluid component. When muons are included, an additional equation for the muon number density, similar to Eq.~\eqref{eq:ElectronNumberConservation}, must be evolved, and the definition of the neutrino lepton current density in Eq.~\eqref{eq:neutrinoLeptonCurrentDensity} must be extended to include contributions from muon neutrinos. (Technically, similar extensions should be made to accommodate tauons, but, because of their large rest mass, they can be neglected as an agent for lepton number exchange with the fluid \citep{BoJaLo17}.) The total neutrino stress-energy tensor is defined as \begin{equation} T_{\mbox{\tiny neutrino}}^{\mu\nu} =\sum_{s=1}^{N_{\mbox{\tiny{\sc Sp}}}}T_{s}^{\mu\nu}, \end{equation} where the stress-energy tensor for neutrino species $s$, $T_{s}^{\mu\nu}$, is defined as in Eq.~\eqref{eq:stressEnergyMoments} with distribution function $f_{s}$, and $N_{\mbox{\tiny{\sc Sp}}}$ is the total number of neutrino species. Using Eqs.~\eqref{eq:fluidFourMomentumConservation} and \eqref{eq:fourMomentumEquation}, the divergence of the total (fluid plus neutrino) stress-energy is \begin{equation} \nabla_{\nu}\big(\,T_{\mbox{\tiny neutrino}}^{\mu\nu}+T_{\mbox{\tiny fluid}}^{\mu\nu}\,\big) =\sum_{s=1}^{N_{\mbox{\tiny{\sc Sp}}}}\int_{V_{p}}\mathcal{C}[f_{s}]\,p^{\mu}\,\pi_{m} - G^{\mu}. \end{equation} Then, four-momentum conservation in neutrino--matter interactions demands that the right-hand side of Eq.~\eqref{eq:fluidFourMomentumConservation} take the form \begin{equation} \label{eq:fourmomentumequationsourceterm} G^{\mu} = \sum_{s=1}^{N_{\mbox{\tiny{\sc Sp}}}}\int_{V_{p}}\mathcal{C}[f_{s}]\,p^{\mu}\,\pi_{m}. \end{equation} To further illustrate the complexity of the neutrino--matter coupling problem, we consider it in the space-homogeneous case using the number-conservative two-moment model discussed in Sect.~\ref{sec:TwoMoment}.
The angular moments of the neutrino distribution function of species $s$ evolve according to \begin{align} d_{t}\big(\,\sqrt{\gamma}\,\big[\,W\,\mathcal{D}_{s}+v^{i}\,\mathcal{I}_{s,i}\,\big]\,\big) &= \f{\alpha\,\sqrt{\gamma}}{4\pi}\int_{\mathbb{S}^{2}}\mathcal{C}[f_{s}]\,\f{d\omega}{\varepsilon}, \label{eq:spectralNumberEquationSpaceHomogeneous} \\ d_{t}\big(\,\sqrt{\gamma}\,\big[\,W\,\mathcal{I}_{s,j}+v^{i}\,\widehat{\mathcal{K}}_{s,ij}\,\big]\,\big) &= \f{\alpha\,\sqrt{\gamma}}{4\pi}\int_{\mathbb{S}^{2}}\mathcal{C}[f_{s}]\,\ell_{j}\,\f{d\omega}{\varepsilon}, \label{eq:spectralNumberFluxEquationSpaceHomogeneous} \end{align} where we use the ordinary derivative $d_{t}=d/dt$ to indicate that we consider the space-homogeneous case, in which physical variables are functions of time only. The right-hand sides of Eqs.~\eqref{eq:spectralNumberEquationSpaceHomogeneous} and \eqref{eq:spectralNumberFluxEquationSpaceHomogeneous} will include the contributions from emission and absorption, scattering, pair processes (as discussed above), and other processes. This sub-problem is typically considered in numerical implementations where neutrino--matter interactions are solved for in a time-implicit fashion, e.g., as is done within an implicit-explicit framework for integrating the full neutrino-radiation hydrodynamics system forward in time, which we will discuss in more detail later (see, e.g., Sect.~\ref{sec:numericalTwoMomentKinetics}). Coupled to the transport equations are the fluid evolution equations, which are combined with the transport equations and formulated as constraints due to mass, four-momentum, and lepton number conservation: \begin{align} d_{t}\big(\,\sqrt{\gamma}\,D\,\big) &=0, \label{eq:massConservationConstraint} \\ d_{t}\big(\,\sqrt{\gamma}\,\big[\,S_{j}+S_{j,\mbox{\tiny neutrino}}\,\big]\,\big) &=0, \label{eq:momentumConservationConstraint} \\ d_{t}\big(\,\sqrt{\gamma}\big[\,\tau_{\mbox{\tiny fluid}}+E_{\mbox{\tiny neutrino}}\,\big]\,\big) &=0, \label{eq:energyConservationConstraint} \\ d_{t}\big(\,\sqrt{\gamma}\,\big[\,N_{e}+N_{\mbox{\tiny neutrino}}\,\big]\,\big) &=0, \label{eq:leptonNumberConservationConstraint} \end{align} where $N_{e}=D\,Y_{e}/m_{\mbox{\tiny B}}$, and \begin{align} S_{j,\mbox{\tiny neutrino}} &=\sum_{s=1}^{N_{\mbox{\tiny{\sc Sp}}}}\int_{0}^{\infty}\mathcal{F}_{j,s}\,dV_{\varepsilon}, \\ E_{\mbox{\tiny neutrino}} &=\sum_{s=1}^{N_{\mbox{\tiny{\sc Sp}}}}\int_{0}^{\infty}\mathcal{E}_{s}\,dV_{\varepsilon}, \\ N_{\mbox{\tiny neutrino}} &=\sum_{s=\nu_{e},\bar{\nu}_{e}}\mathsf{g}_{s}\int_{0}^{\infty}\mathcal{N}_{s}\,dV_{\varepsilon}, \end{align} and where the Eulerian angular moments $\mathcal{F}_{j,s}$, $\mathcal{E}_{s}$, and $\mathcal{N}_{s}$ are defined in Sect.~\ref{sec:EulerianDecompositions}.
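As an illustrative sketch of how such energy-integrated totals are assembled numerically (the grid handling below, including the crude bin widths from \texttt{np.gradient}, is an assumption for illustration, not a prescription from any particular code):
\begin{verbatim}
import numpy as np

def energy_integrated_total(moment, eps):
    """Approximate int_0^infty M(eps) dV_eps, dV_eps = 4*pi*eps^2*deps.

    moment : (Nsp, Ne) spectral moments, one row per species
    eps    : (Ne,) energy-grid centers
    Returns one energy-integrated total per species.
    """
    deps = np.gradient(eps)              # crude bin widths for illustration
    dV = 4.0 * np.pi * eps**2 * deps     # discrete energy-space volumes
    return (moment * dV).sum(axis=1)
\end{verbatim}
Species sums such as $N_{\mbox{\tiny neutrino}}$ additionally carry the lepton-number signs $\mathsf{g}_{s}$.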
The Eulerian neutrino number density $\mathcal{N}_{s}$ is expressed in terms of the Lagrangian moments in Eq.~\eqref{eq:eulerianNumberInTermsOfLagrangianMoments}, which is also the expression inside the time-derivative on the left-hand side of Eq.~\eqref{eq:spectralNumberEquationSpaceHomogeneous}. The Eulerian momentum and energy can also be written as combinations of the quantities inside the time-derivatives on the left-hand sides of Eqs.~\eqref{eq:spectralNumberEquationSpaceHomogeneous} and \eqref{eq:spectralNumberFluxEquationSpaceHomogeneous}: \begin{align} \mathcal{F}_{j,s} &=\varepsilon\, \big\{\, W\,v_{j}\,\big[\,W\,\mathcal{D}_{s}+v^{i}\,\mathcal{I}_{s,i}\,\big] +\big[\,W\,\mathcal{I}_{s,j}+v^{i}\,\widehat{\mathcal{K}}_{s,ij}\,\big] \,\big\}, \\ \mathcal{E}_{s} &=\varepsilon\, \big\{\, W\,\big[\,W\,\mathcal{D}_{s}+v^{i}\,\mathcal{I}_{s,i}\,\big] +v^{j}\,\big[\,W\,\mathcal{I}_{s,j}+v^{i}\,\widehat{\mathcal{K}}_{s,ij}\,\big] \,\big\}. \end{align} Thus, adopting a closure for the radiation moments---writing $\widehat{\mathcal{K}}_{s,ij}$ in terms of $\mathcal{D}_{s}$ and $\mathcal{I}_{s,j}$, as discussed in Sect.~\ref{sec:closure}---and an equation of state for the fluid, $p=p(\rho,e,Y_{e})$, the system given by Eqs.~\eqref{eq:spectralNumberEquationSpaceHomogeneous}--\eqref{eq:spectralNumberFluxEquationSpaceHomogeneous} and \eqref{eq:massConservationConstraint}--\eqref{eq:leptonNumberConservationConstraint} can be solved for the radiation moments $\mathcal{D}_{s}$ and $\mathcal{I}_{s,j}$, and the fluid states $\rho$, $v^{i}$, $e$, and $Y_{e}$. This is a nonlinear system of equations, where nonlinearities are due to the radiation moment closure, the fluid equation of state, the dependence of $D$, $S_{j}$, and $\tau_{\mbox{\tiny fluid}}$ on $\rho$, $v^{i}$, $e$, and $Y_{e}$, and the nonlinear dependence of the neutrino opacities discussed in Sect.~\ref{sec:collisionTermsTwoMoment} on the thermodynamic state $\rho$, $e$, and $Y_{e}$. Modeling this four-momentum and lepton exchange between neutrinos and the fluid---with all the relevant neutrino--matter interactions included---constitutes the major computational cost of core-collapse supernova models. \section{Phase-space discretizations and implementations} \subsection{Boltzmann kinetics: spatial and energy finite differencing plus discrete ordinates} \subsubsection{Phase-space coordinates} \label{sec:PhaseSpaceCoordinates} \begin{figure}[htb] \includegraphics[width=\textwidth]{momspacevar} \caption{Diagram illustrating the spherical momentum-space coordinates used in most neutrino radiation hydrodynamics implementations. The angle $\theta_p$ is the angle between the outgoing radial direction and the neutrino propagation direction, at the neutrino's location. The neutrino direction cosine, $\mu\equiv\cos\theta_p$, is defined in terms of it. $\phi_p$ is the associated momentum-space azimuthal angle. In spherical symmetry, the distribution function is only a function of $\mu$, not $\phi_p$.} \label{fig:momspacevar} \end{figure} In a spherical spatial coordinate system, the neutrino's direction of propagation is specified relative to the basis vectors $\{\mathbf{e}_{r,\theta,\phi}\}$ as (see Fig.~\ref{fig:momspacevar}) \begin{equation} \mathbf{n}=(n^{r},n^{\theta},n^{\phi}), \label{eq:componentsofn} \end{equation} where \begin{equation} n^{r}=\cos\theta_{p}, \end{equation} \begin{equation} n^{\theta}= \sin\theta_{p}\cos\phi_{p}, \end{equation} \begin{equation} n^{\phi}= \sin\theta_{p}\sin\phi_{p}. \end{equation} This can be re-expressed as \begin{equation} n^{r}= \mu, \end{equation} \begin{equation} n^{\theta}= (1 - \mu^2)^{\frac{1}{2}}\cos\phi_{p}, \end{equation} \begin{equation} n^{\phi} = (1 - \mu^2)^{\frac{1}{2}}\sin\phi_{p}, \end{equation} where $\mu\equiv\cos\theta_{p}$.
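As a trivial but concrete illustration of these momentum-space coordinates (a sketch; the function name is ours):
\begin{verbatim}
import numpy as np

def propagation_direction(mu, phi_p):
    """Components (n^r, n^theta, n^phi) of the unit propagation
    direction, from the direction cosine mu = cos(theta_p) and the
    momentum-space azimuthal angle phi_p."""
    s = np.sqrt(1.0 - mu**2)
    return mu, s * np.cos(phi_p), s * np.sin(phi_p)
\end{verbatim}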
When spherical spatial and momentum-space coordinates are used, as defined above, the neutrino distribution function has the following dependencies for no imposed symmetry, axisymmetry, and spherical symmetry, \begin{equation} f=f(r,\theta,\phi,\mathbf{n},E,t)=f(r,\theta,\phi,\mu,\phi_{p},E,t), \end{equation} \begin{equation} f=f(r,\theta,\mathbf{n},E,t)=f(r,\theta,\mu,\phi_{p},E,t), \end{equation} \begin{equation} f=f(r,\mathbf{n},E,t)=f(r,\mu,E,t), \end{equation} respectively, where in all three cases $E$ is the neutrino energy. \subsubsection{Spherical symmetry} We illustrate the approach used by \citet{MeBr93b,MeMe99,LiMeMe04,MeLiCa04,MeLiCa06} in the context of a model that assumes Newtonian gravity and is valid to $\mathcal{O}(v/c)$. The fully general relativistic case is detailed in \citet{LiMeMe04}. In the Newtonian-gravity, $\mathcal{O}(v/c)$ case, the conservative neutrino Boltzmann equation reads \begin{eqnarray} \label{eq:boltzeq} & & \frac{1}{c}\frac{\partial F}{\partial t} + 4\pi \mu \frac{\partial (r^{2}\rho F)}{\partial m} + \frac{1}{r}\frac{\partial [(1-\mu ^{2})F]}{\partial \mu } \\ \nonumber & + & \frac{1}{c}(\frac{\partial \rm{ln}\rho }{\partial t}+\frac{3v}{r}) \frac{\partial [\mu (1-\mu ^{2})F]}{\partial \mu } + \frac{1}{c}[\mu ^{2}(\frac{\partial \rm{ln}\rho }{\partial t}+\frac{3v}{r})-\frac{v}{r}] \frac{1}{E^{2}}\frac{\partial (E^{3}F)}{\partial E} \nonumber \\ & = & \frac{j}{\rho }-\tilde{\chi }F \nonumber \\ & + & \frac{1}{c}\frac{1}{h^{3}c^{3}}E^{2}\int d\mu 'R_{{\rm IS}}F - \frac{1}{c}\frac{1}{h^{3}c^{3}}E^{2}F\int d\mu 'R_{{\rm IS}} \nonumber \\ & + & \frac{1}{h^{3}c^{4}} (\frac{1}{\rho }-F) \int dE'E'^{2}d\mu ' \tilde{R}_{{\rm NIS}}^{{\rm in}} F - \frac{1}{h^{3}c^{4}} F \int dE'E'^{2}d\mu ' \tilde{R}_{{\rm NIS}}^{{\rm out}} (\frac{1}{\rho }-F) \nonumber \\ & + &\frac{1}{h^{3}c^{4}} (\frac{1}{\rho }-F) \int dE'E'^{2}d\mu ' \tilde{R}_{{\rm PAIR}}^{{\rm em}} (\frac{1}{\rho }-\bar{F}) - \frac{1}{h^{3}c^{4}} F \int dE'E'^{2}d\mu ' \tilde{R}_{{\rm PAIR}}^{{\rm abs}} \bar{F} \nonumber , \end{eqnarray} where $F\equiv f/\rho$, $m$ is the Lagrangian mass coordinate, $\mu$ is the neutrino direction cosine, as defined above, and $E$ is the neutrino energy. In spherical symmetry, $F=F(t,m,\mu,E)$. After the time derivative term on the left-hand side of the Boltzmann equation, the remaining terms correspond to the transport of neutrinos in all three dimensions of phase space: $(m,\mu,E)$. The first term corresponds to spatial transport of neutrinos through the stellar core layers. As a neutrino propagates through the core, its direction cosine, defined in spherical coordinates with respect to the outward radial vector at its position, changes. This is captured by the second term. The third and fourth terms capture the transport of neutrinos in angle and energy due to relativistic (in this case to $\mathcal{O}(v/c)$) angular aberration and frequency shift, respectively. On the right-hand side, the collision term includes (1) thermal emission, with emissivity, $j$, (2) absorption, with absorption opacity $\tilde{\chi}\equiv j+\chi$, which accounts for stimulated absorption, (3) iso-energetic scattering, with scattering kernel $R_{\rm IS}$, (4) non-isoenergetic scattering, with scattering kernel, $R_{\rm NIS}$, and (5) neutrino pair creation and annihilation, with pair-production kernel, $R_{\rm PAIR}$. The distribution function for antineutrinos is designated by $\bar{F}$.
While the left-hand side of the Boltzmann equation is linear in the distribution functions, it is important to note that the right-hand side is not. The nonlinearity on the right-hand side is evident in the blocking factors, which reflect the boundedness of the neutrino distribution functions: $f$ lies in the range $[0,1]$. There is an additional nonlinearity that is implicit in the equation. The distribution functions are updated together with the matter internal energy and electron fraction, due to energy and lepton number exchange between the neutrinos and the matter as a result of the above processes. In turn, the neutrino emissivity, opacity, and scattering kernels depend on the thermodynamic state of the matter, which depends on the matter's density, internal energy, and electron fraction. Thus, a simultaneous linearization of the discretized equations of neutrino radiation hydrodynamics in the neutrino distribution functions, the matter internal energy, and the matter electron fraction is required. The finite differencing of the time derivative of the neutrino distribution function in Eq.~\eqref{eq:boltzeq} is straightforward: \begin{equation} \label{eq:eq_ct_fd} \frac{\partial F}{\partial t}=\frac{F_{i',j',k'}-{F}_{i',j',k'}^{n}}{dt}. \end{equation} For simplicity, we define the zone-center indices for each of the phase space dimensions with primed indices: $i^{'}\equiv i+1/2$, $j^{'}\equiv j+1/2$, and $k^{'}\equiv k+1/2$. Focus now on the spatial advection term, the first of the $\mathcal{O}(1)$ terms. In the free streaming limit, the advected neutrino number in a time step (as measured by a comoving observer) can be large relative to the neutrino number in a zone (mass shell). Upwind differencing of the advection term is appropriate to limit destabilizing errors in the fluxes. For discrete direction cosines, \( \mu _{j'} \), the direction of the neutrino ``wind'' is given by the sign of \( \mu _{j'} \). On the other hand, in diffusive conditions, the neutrino flux may be orders of magnitude smaller than the nearly isotropic neutrino density in a zone. In this situation, an asymmetric differencing can lead to an overestimation of the first angular moment because of improper cancellations among the contributions of the nearly isotropic neutrino radiation field. As a result, Mezzacappa et~al.\ interpolate between upwind differencing in free streaming regimes and centered differencing in diffusive regimes. Specifically, using the coefficients, \( \beta _{i,k'} \), defined as \begin{equation} \label{eq:eq_transport_coefficients_fd} \beta _{i,k'}=\left\{ \begin{array}{cc} 1/2 & {\rm if}\quad 2dr_{i}>\lambda _{i,k'},\\ \left( 2dr_{i}/\lambda _{i,k'}+1\right) ^{-1} & {\rm otherwise}, \end{array}\right.
\end{equation} where $\lambda_{i,k'}$ is the angle-averaged neutrino mean free path, the spatial advection term is discretized as \begin{equation} \label{eq:eq_fd_da} \mu\frac{\partial r^{2}\rho F}{\partial m}=\frac{\mu _{j'}}{dm_{i'}}\left[ 4\pi r^{2}_{i+1}\rho _{i+1}F_{i+1,j',k'} -4\pi r^{2}_{i}\rho _{i}F_{i,j',k'}\right] \end{equation} with \begin{equation} \rho _{i}F_{i,j',k'}=\beta _{i,k'}\rho _{i'-1}F_{i'-1,j',k'}+\left( 1-\beta _{i,k'}\right)\rho _{i'}F_{i',j',k'} \label{eq:eq_Fi_interpolation_out} \end{equation} for outward propagating neutrinos \( \left( \mu _{j'}>0\right) \) and \begin{equation} \label{eq:eq_Fi_interpolation_in} \rho _{i}F_{i,j',k'}=\left( 1-\beta _{i,k'}\right)\rho _{i'-1}F_{i'-1,j',k'}+\beta _{i,k'}\rho _{i'}F_{i',j',k'} \end{equation} for inward propagating neutrinos \( \left( \mu _{j'}<0\right) \). Next, focusing on the angular advection term, Mezzacappa et~al.\ use the following discretization: \begin{equation} \label{eq:eq_dmu_fd} \frac{\partial [(1-\mu^{2})F]}{r\partial\mu} =\frac{3\left[ r^{2}_{i+1}-r^{2}_{i}\right] }{2\left[ r^{3}_{i+1}-r^{3}_{i}\right]} \frac{1}{w_{j'}}\left( \zeta _{j+1}F_{i',j+1,k'}-\zeta _{j}F_{i',j,k'}\right) . \end{equation} The differencing of the coefficients, \( \zeta =1-\mu ^{2} \), is defined by \begin{equation} \label{eq:eq_def_angular_diff_coff} \zeta _{j+1}-\zeta _{j}=-2\mu _{j'}w_{j'}, \end{equation} where the $w_{j'}$ are the weights corresponding to the Gaussian quadrature values used for $\mu_{j'}$. The discretization of the coefficient, $1/r$, of the angular advection term is set such that in an infinite homogeneous medium in thermal equilibrium, $\rho F= f_{\rm eq} =$ constant is a solution \citep{MeBr93b}. The angular integration of the term $\partial [(1-\mu^{2})F]/r\partial\mu$ produces the zeroth and second angular moments of the neutrino distribution function. Its finite difference representation is therefore not as sensitive to cancellations in the diffusive limit as the differencing of the spatial advection term. Upwind differencing is justified: the angular ``wind'' always points towards \( \mu =1 \). However, for reasons of completeness and consistency, Mezzacappa et~al.\ use centered differencing in the diffusive regime here as well, with angular coefficients, \( \gamma _{i',k'}\equiv \beta _{i',k'} \), and \begin{equation} \label{eq:eq_Fj_interpolation} F_{i',j,k'}=\gamma _{i',k'}F_{i',j'-1,k'}+\left( 1-\gamma _{i',k'}\right) F_{i',j',k'}.
\end{equation} Finally, Mezzacappa et~al.\ discretize the last of the $\mathcal{O}(1)$ terms in the Boltzmann equation, the collision term, as \begin{eqnarray} & & \frac{j^{n+1}_{i^{'},k^{'}}}{\rho^{n+1}_{i^{'}}}-\tilde {\chi }^{n+1}_{i^{'},k^{'}} \, F_{i^{'},j^{'},k^{'}} \nonumber \\ & + & \frac{1}{ch^{3}c^{3}}\, E_{k^{'}}^{2}\sum_{l=1}^{jmax}w_{l^{'}}\, (R_{\rm IS})^{n+1}_{i^{'},j^{'},l^{'},k^{'}} \, F_{i^{'},l^{'},k^{'}} - \frac{1}{ch^{3}c^{3}}\, E_{k^{'}}^{2}\, F_{i^{'},j^{'},k^{'}} \sum_{l=1}^{jmax}w_{l^{'}}\, (R_{\rm IS})^{n+1}_{i^{'},j^{'},l^{'},k^{'}} \nonumber \\ & + & \frac{1}{ch^{3}c^{3}}\,(1/\rho^{n+1}_{i^{'}}-F_{i^{'},j^{'},k^{'}}) \sum_{m=1}^{kmax} \Delta E_{m^{'}} E_{m^{'}}^{2} \sum_{l=1}^{jmax}w_{l^{'}}\, \times (\tilde{R}^{\rm in}_{\rm NIS})^{n+1}_{i^{'},j^{'},l^{'},k^{'},m^{'}}\, F_{i^{'},l^{'},m^{'}} \nonumber \\ & - & \frac{1}{ch^{3}c^{3}}\,F_{i^{'},j^{'},k^{'}} \sum_{m=1}^{kmax} \Delta E_{m^{'}} E_{m^{'}}^{2} \sum_{l=1}^{jmax}w_{l^{'}}\, \times (\tilde{R}^{\rm out}_{\rm NIS})^{n+1}_{i^{'},j^{'},l^{'},k^{'},m^{'}}\, (1/\rho^{n+1}_{i^{'}}-F_{i^{'},l^{'},m^{'}}) \nonumber \\ & + & \frac{1}{ch^{3}c^{3}}\,(1/\rho^{n+1}_{i^{'}}-F_{i^{'},j^{'},k^{'}}) \sum_{m=1}^{kmax} \Delta E_{m^{'}} E_{m^{'}}^{2} \sum_{l=1}^{jmax}w_{l^{'}}\, \times (\tilde{R}^{\rm em}_{\rm PAIR})^{n+1}_{i^{'},j^{'},l^{'},k^{'},m^{'}}\, (1/\rho^{n+1}_{i^{'}}-\bar{F}_{i^{'},l^{'},m^{'}}) \nonumber \\ & - & \frac{1}{ch^{3}c^{3}}\,F_{i^{'},j^{'},k^{'}} \sum_{m=1}^{kmax} \Delta E_{m^{'}} E_{m^{'}}^{2} \sum_{l=1}^{jmax}w_{l^{'}}\, \times (\tilde{R}^{\rm abs}_{\rm PAIR})^{n+1}_{i^{'},j^{'},l^{'},k^{'},m^{'}}\, \bar{F}_{i^{'},l^{'},m^{'}}. \label{eq:collfd} \end{eqnarray} It is important to note that the collision term is differenced implicitly with respect to time. All of the neutrino and antineutrino distribution functions in Eq.~\eqref{eq:collfd} are evaluated at the new time step. Given the implementation of discrete ordinates in angle, the angular integrals in the collision term are evaluated with Gaussian quadrature, using the same quadrature set used for the angular discretizations of the distribution function and terms on the left-hand side of the Boltzmann equation. \subsubsection{Challenges: relativistic effects and the simultaneous conservation of lepton number and energy} \label{sec:relativisticEffectsAndConservationOfEnergy} Define \begin{eqnarray} J^{N} & = & \int ^{1}_{-1}\int ^{\infty }_{0}FE^{2}dEd\mu , \label{eq:JN}\\ H^{N} & = & \int ^{1}_{-1}\int ^{\infty }_{0}FE^{2}dE\mu d\mu . \label{eq:HN} \end{eqnarray} \( J^{N} \) and \( H^{N} \) are the zeroth and first angular \emph{number} moments of the distribution function. Integration of Eq.~\eqref{eq:boltzeq} over $\mu$ and $E$ with $E^{2}$ as the measure of integration gives the following evolution equation for $J^{N}$: \begin{equation} \label{eq:eq_neutrino_number_conservation} \frac{\partial J^{N}}{\partial t}+\frac{\partial }{\partial m}\left[ 4\pi r^{2}\rho H^{N}\right] -\int \frac{j}{\rho }E^{2}dEd\mu + \int \chi FE^{2}dEd\mu =0. \end{equation} One more integration over rest mass \( m \) from the center of the star to its surface gives the evolution equation for the total neutrino (lepton) number. It is clear from Eq.~\eqref{eq:eq_neutrino_number_conservation} that the total neutrino (lepton) number in the computational domain will change only as a result of an inflow or an outflow of neutrinos at the boundary of the domain and/or as a result of the exchange of lepton number between the neutrinos and the matter.
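The discrete number moments are evaluated with the same Gaussian quadrature set used for the collision term. A minimal sketch (array layout and names are ours):
\begin{verbatim}
import numpy as np

mu, w = np.polynomial.legendre.leggauss(8)   # ordinates mu_{j'}, weights w_{j'}

def number_moments(F, E, dE):
    """Discrete zeroth and first angular number moments J^N and H^N.

    F     : (Nmu, Nk) distribution F at fixed (t, m)
    E, dE : (Nk,) energy-group centers and widths
    """
    dVE = E**2 * dE                               # E^2 dE measure per group
    JN = np.einsum('j,jk,k->', w, F, dVE)         # int F E^2 dE dmu
    HN = np.einsum('j,j,jk,k->', w, mu, F, dVE)   # int F E^2 dE mu dmu
    return JN, HN
\end{verbatim}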
Now, in the same way, define the energy moments: \begin{eqnarray} J^{E} & = & \int FE^{3}dEd\mu , \label{eq:JE}\\ H^{E} & = & \int FE^{3}dE\mu d\mu , \label{eq:HE}\\ K^{E} & = & \int FE^{3}dE\mu ^{2}d\mu . \label{eq:KE} \end{eqnarray} By taking the zeroth and first angular moments of the energy moment ($\int E^{3}dE\{ \partial F/\partial t = O[F]\}$) of the Boltzmann equation, the latter weighted by the fluid velocity, $v$---i.e., $\int E^{3}dEd\mu \{ \partial F/\partial t = O[F]\}$ and $v\int E^{3}dEd\mu \mu \{ \partial F/\partial t = O[F]\}$---one obtains two equations: \begin{eqnarray} \label{eq:eq_radiation_energy} \frac{\partial J^{E}}{\partial t} & + & \frac{\partial }{\partial m}\left[ 4\pi r^{2}\rho H^{E}\right] -\left( \frac{\partial \rm{\ln} \rho }{\partial t} +\frac{2v}{r}\right) K^{E}+\frac{v}{r}\left( J^{E}-K^{E}\right) \nonumber \\ & - & \int \frac{j}{\rho }E^{3}dEd\mu +\int \chi FE^{3}dEd\mu =0, \end{eqnarray} and \begin{eqnarray} \label{eq:eq_radiation_momentum} v\frac{\partial H^{E}}{\partial t} & + & \frac{\partial }{\partial m}\left[ 4\pi r^{2}v\rho K^{E}\right] -4\pi r^{2}\rho \frac{dv}{dm}K^{E}-\frac{v}{r}\left( J^{E}-K^{E}\right) \nonumber\\ & - & v\left( \frac{\partial \rm{\ln} \rho }{\partial t}+\frac{2v}{r}\right) H^{E} + v\int \chi FE^{3}dE\mu d\mu =0. \end{eqnarray} Eq.~\eqref{eq:eq_radiation_energy} is the evolution equation for the comoving-frame neutrino energy per gram. Eq.~\eqref{eq:eq_radiation_momentum} is the evolution equation for the comoving-frame neutrino momentum per gram. Combining the two, to $\mathcal{O}(v/c)$, one obtains the laboratory-frame neutrino energy conservation equation: \begin{eqnarray} 0 & = & \frac{\partial }{\partial t}\left( J^{E}+vH^{E}\right) +\frac{\partial }{\partial m}\left[ 4\pi r^{2}\rho \left( vK^{E}+H^{E}\right) \right] \nonumber \\ & - & \int \frac{j}{\rho }E^{3}dEd\mu +\int \chi FE^{3}dEd\mu +v\int \chi FE^{3}dE\mu d\mu. \label{eq:eq_ovc_radiation_energy_conservation} \end{eqnarray} Note that $J^{E}+vH^{E}$ is the laboratory-frame neutrino energy per gram as expressed in terms of the comoving-frame moments $J^{E}$ and $H^{E}$. Similarly, $vK^{E}+H^{E}$ is the laboratory-frame flux per gram expressed in terms of comoving-frame moments. Integration of Eq.~\eqref{eq:eq_ovc_radiation_energy_conservation} over enclosed mass leads to an equation for total neutrino energy conservation. It is clear that, with the exception again of fluxes at the boundary of the computational domain and energy exchange with the matter due to collisions (the terms involving $j$ and $\chi$) and neutrino stress (the term involving $v\chi$), the total neutrino energy as defined in the laboratory frame (where one can speak of conservation of energy) is conserved. In arriving at Eq.~\eqref{eq:eq_ovc_radiation_energy_conservation}, the expressions $(\partial \rm{ln}\rho /\partial t +2v/r)K^{E}$ and $K^{E}4\pi r^{2}\rho \partial v/\partial m$ in Eqs.~\eqref{eq:eq_radiation_energy} and \eqref{eq:eq_radiation_momentum} cancel given the continuity equation \begin{equation} \frac{\partial \rm{ln} \rho }{\partial t}+\frac{2v}{r}=-4\pi r^{2}\rho \frac{\partial v}{\partial m}. \label{eq:continuity} \end{equation} To achieve global energy conservation in the discrete limit, one must ensure that these cancellations occur in the finite differencing as well.
Identifying the origin of the terms $(\partial \rm{ln}\rho /\partial t +2v/r)K^{E}$ and \( K^{E}4\pi r^{2}\rho \partial v/\partial m \), we find that $(\partial \rm{ln}\rho /\partial t +2v/r)K^{E}$ originates from the zeroth moment of the energy advection term, \begin{equation} \left[ \mu ^{2}\left( \frac{\partial \rm{ln} \rho }{\partial t}+\frac{2v}{r}\right) -\left( 1-\mu ^{2}\right) \frac{v}{r}\right] \frac{1}{E^{2}}\frac{\partial }{\partial E}\left( E^{3}F\right) , \label{eq:eq_observer_frequency} \end{equation} in the Boltzmann equation \eqref{eq:boltzeq}, and \( K^{E}4\pi r^{2}\rho \partial v/\partial m \) originates from the first moment of the spatial advection term, \begin{equation} \mu \frac{\partial \left( 4\pi r^{2}\rho F\right)}{\partial m}, \label{eq:spatialadvection} \end{equation} in the same equation. The terms \( \left( J^{E}-K^{E}\right) v/r \) also stem from the zeroth moment of the energy advection term, Eq.~\eqref{eq:eq_observer_frequency}, and the first moment of the angular advection term \begin{equation} \frac{1}{r}\frac{\partial \left[ \left( 1-\mu ^{2}\right) F\right]}{\partial \mu} \label{eq:angularadvection} \end{equation} in the Boltzmann equation \eqref{eq:boltzeq}. The requirement of global energy conservation in the laboratory frame therefore imposes interdependencies on the finite differencing of the $\mathcal{O}(1)$ spatial and angular advection terms, Eqs.~\eqref{eq:spatialadvection} and \eqref{eq:angularadvection}, and the $\mathcal{O}(v/c)$ energy advection term, Eq.~\eqref{eq:eq_observer_frequency} \citep{LiMeMe04}. In particular, given a choice of finite differencing of the $\mathcal{O}(1)$ terms on the left-hand side of the Boltzmann equation \eqref{eq:boltzeq}, conservation of energy requires ``matched'' finite differencing for the coefficients \begin{equation} A \equiv \frac{\partial \rm{ln} \rho }{\partial t}+\frac{2v}{r} \label{eq:Adef} \end{equation} and \begin{equation} B \equiv (1-\mu^{2})\frac{v}{r} \label{eq:Bdef} \end{equation} of the $\mathcal{O}(v/c)$ advection terms in the same equation. 
Mezzacappa et~al.\ begin by multiplying the discrete representation of the $\mathcal{O}(1)$ terms on the left-hand side of the Boltzmann equation \eqref{eq:boltzeq} by $1+\mu\bar{v}_{i+1}$ (in what follows, unless otherwise specified the indices are $i'$, $j'$, and $k'$): \begin{eqnarray} \label{eq:discrete1} & &(1+\mu\bar{v}_{i+1})E\frac{F-\bar{F}}{cdt} +(1+\mu\bar{v}_{i+1})E\frac{4\pi\mu}{dm}[\bar{r}^{2}_{i+1}\bar{\rho}_{i+1}F_{i+1}-\bar{r}^{2}_{i}\bar{\rho}_{i}F_{i}] \\ \nonumber &+&(1+\mu\bar{v}_{i+1})E\frac{3(\bar{r}^{2}_{i+1}-\bar{r}^{2}_{i})}{2(\bar{r}^{3}_{i+1}-\bar{r}^{3}_{i})}\frac{1}{w}[\zeta_{j+1}F_{j+1}-\zeta_{j}F_{j}] \\ \nonumber &=&\frac{(1+\mu v_{i+1})EF-(1+\mu\bar{v}_{i+1})E\bar{F}}{cdt} -\frac{\mu v_{i+1}EF-\mu\bar{v}_{i+1}EF}{cdt} \\ \nonumber &+&\frac{4\pi\mu}{dm}[(1+\mu\bar{v}_{i+1})E\bar{r}^{2}_{i+1}\bar{\rho}_{i+1}F_{i+1}-(1+\mu\bar{v}_{i})E\bar{r}^{2}_{i}\bar{\rho}_{i}F_{i}] \\ \nonumber & - & \frac{4\pi\mu^{2}}{dm}[\bar{v}_{i+1}\bar{r}^{2}_{i}\bar{\rho}_{i}EF_{i}-\bar{v}_{i}\bar{r}^{2}_{i}\bar{\rho}_{i}EF_{i}] \\ \nonumber &+&\frac{3(\bar{r}^{2}_{i+1}-\bar{r}^{2}_{i})}{2(\bar{r}^{3}_{i+1}-\bar{r}^{3}_{i})}\frac{1}{w}[\zeta_{j+1}EF_{j+1}-\zeta_{j}EF_{j}] \\ \nonumber & + & \frac{3(\bar{r}^{2}_{i+1}-\bar{r}^{2}_{i})}{2(\bar{r}^{3}_{i+1}-\bar{r}^{3}_{i})}\bar{v}_{i+1}\frac{1}{w}[\mu\zeta_{j+1}EF_{j+1}-\mu\zeta_{j}EF_{j}] \\ \nonumber &=&\frac{(1+\mu v_{i+1})EF-(1+\mu\bar{v}_{i+1})E\bar{F}}{cdt} - EF\frac{\mu v_{i+1}-\mu\bar{v}_{i+1}}{cdt} \\ \nonumber &+&\frac{4\pi\mu}{dm}[(1+\mu\bar{v}_{i+1})E\bar{r}^{2}_{i+1}\bar{\rho}_{i+1}F_{i+1}-(1+\mu\bar{v}_{i})E\bar{r}^{2}_{i}\bar{\rho}_{i}F_{i}] -\frac{4\pi\mu^{2}}{dm}\bar{r}^{2}_{i}\bar{\rho}_{i}EF_{i}[\bar{v}_{i+1}-\bar{v}_{i}] \\ \nonumber & + & \frac{3(\bar{r}^{2}_{i+1}-\bar{r}^{2}_{i})}{2(\bar{r}^{3}_{i+1}-\bar{r}^{3}_{i})}\frac{1}{w}[\zeta_{j+1}EF_{j+1}-\zeta_{j}EF_{j}] \\ \nonumber &+& \frac{3(\bar{r}^{2}_{i+1}-\bar{r}^{2}_{i})}{2(\bar{r}^{3}_{i+1}-\bar{r}^{3}_{i})} \bar{v}_{i+1}\frac{1}{w}[\mu\zeta_{j+1}EF_{j+1}-\mu\zeta_{j}EF_{j}]. \end{eqnarray} \noindent A bar over a variable indicates its value is to be taken at time step $t^n$. (In this and the subsequent discretizations, $\bar{F}$ thus denotes the old-time-step value of the distribution function, not the antineutrino distribution function.) As noted, the total energy equation is obtained when summing Eqs.~\eqref{eq:eq_radiation_energy} and \eqref{eq:eq_radiation_momentum} and then integrating over $m$ (the integration in $\mu$ and $E$ has already taken place). In this sequence of integrations (over $\mu$, $E$, and then $m$), the term involving $A$ in Eq.~\eqref{eq:eq_radiation_energy} cancels with the term $-4\pi r^{2}\rho K^{E}dv/dm$ in Eq.~\eqref{eq:eq_radiation_momentum}.
Identifying the appropriate velocity gradient term in Eq.~\eqref{eq:discrete1} and focusing on the appropriate integration (in this case, over $m$), Mezzacappa et~al.\ require that [below, the term involving $A$ comes from the zeroth moment of the first term in the observer correction \eqref{eq:eq_observer_frequency} \emph{after an integration by parts in energy, $E$}; the term involving the velocity gradient is the next to last term in Eq.~\eqref{eq:discrete1}, corresponding to the first moment of the spatial propagation term in the Boltzmann equation \eqref{eq:boltzeq}]: \begin{eqnarray} & &\sum_{i=1,imax-1}\mu^2 A_{i'}F_{i'}dm_{i'} \nonumber \\ &-&\sum_{i=1,imax-1}4\pi\mu^{2}\bar{r}_{i}^{2}\bar{\rho}_{i}F_{i}(\bar{v}_{i+1}-\bar{v}_{i}) \nonumber \\ &=&\sum_{i=1,imax-1}\mu^2 A_{i'}F_{i'}dm_{i'} \nonumber \\ &-&\sum_{i=1,imax-1,j\leq jmax/2}4\pi\mu^{2}\bar{r}_{i}^{2}(\beta_{i}\bar{\rho}_{i'}F_{i'}+(1-\beta_{i})\bar{\rho}_{i'-1}F_{i'-1})(\bar{v}_{i+1}-\bar{v}_{i}) \nonumber \\ &-&\sum_{i=1,imax-1,j\geq jmax/2+1}4\pi\mu^{2}\bar{r}_{i}^{2}(\beta_{i}\bar{\rho}_{i'-1}F_{i'-1}+(1-\beta_{i})\bar{\rho}_{i'}F_{i'})(\bar{v}_{i+1}-\bar{v}_{i}) \nonumber \\ &=&\sum_{i=1,imax-1}\mu^2 A_{i'}F_{i'}dm_{i'} \nonumber \\ &-&\sum_{i=1,imax-1,j\leq jmax/2}4\pi\mu^{2}\bar{r}_{i}^{2}(\bar{v}_{i+1}-\bar{v}_{i})\beta_{i}\bar{\rho}_{i'}F_{i'} \nonumber \\ &-&\sum_{i=1,imax-2,j\leq jmax/2}4\pi\mu^{2}\bar{r}_{i+1}^{2}(\bar{v}_{i+2}-\bar{v}_{i+1})(1-\beta_{i+1})\bar{\rho}_{i'}F_{i'} \nonumber \\ &-&\sum_{i=1,imax-1,j\geq jmax/2+1}4\pi\mu^{2}\bar{r}_{i}^{2}(\bar{v}_{i+1}-\bar{v}_{i})(1-\beta_{i})\bar{\rho}_{i'}F_{i'} \nonumber \\ &-&\sum_{i=1,imax-2,j\geq jmax/2+1}4\pi\mu^{2}\bar{r}_{i+1}^{2}(\bar{v}_{i+2}-\bar{v}_{i+1})\beta_{i+1}\bar{\rho}_{i'}F_{i'}\nonumber \\ & = & 0, \end{eqnarray} \noindent which gives \begin{equation} A_{i',k'}=4\pi\frac{\bar{\rho}_{i'}}{dm_{i'}}(\bar{r}_{i}^{2}(\bar{v}_{i+1}-\bar{v}_{i})\beta_{i,k'} +\bar{r}_{i+1}^{2}(\bar{v}_{i+2}-\bar{v}_{i+1})(1-\beta_{i+1,k'})) \label{eq:coeffdiff1} \end{equation} for $j\leq jmax/2$ and \begin{equation} A_{i',k'}=4\pi\frac{\bar{\rho}_{i'}}{dm_{i'}}(\bar{r}_{i}^{2}(\bar{v}_{i+1}-\bar{v}_{i})(1-\beta_{i,k'}) +\bar{r}_{i+1}^{2}(\bar{v}_{i+2}-\bar{v}_{i+1})\beta_{i+1,k'}) \label{eq:coeffdiff2} \end{equation} for $j\geq jmax/2 + 1$. (The case $i=imax-1$ is a boundary case, the details of which are not important for the present discussion.)
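A sketch of Eqs.~\eqref{eq:coeffdiff1}--\eqref{eq:coeffdiff2} follows; the convention of storing zone-centered quantities at integer index \texttt{i} (standing in for $i'$) is ours:
\begin{verbatim}
import numpy as np

def matched_A(i, r_edge, v_edge, rho_cen, dm_cen, beta, outward):
    """Matched coefficient A_{i',k'} of Eqs. (coeffdiff1)-(coeffdiff2).

    r_edge, v_edge : zone-edge radii and velocities (index i -> edge i)
    rho_cen, dm_cen: zone-center densities and masses (index i -> i')
    beta           : edge interpolation coefficients beta_{i,k'}
    outward        : True for outward propagation (mu_{j'} > 0)
    """
    dv_in = v_edge[i + 1] - v_edge[i]        # velocity jump, inner edge pair
    dv_out = v_edge[i + 2] - v_edge[i + 1]   # velocity jump, outer edge pair
    if outward:                              # j >= jmax/2 + 1
        w_in, w_out = 1.0 - beta[i], beta[i + 1]
    else:                                    # j <= jmax/2
        w_in, w_out = beta[i], 1.0 - beta[i + 1]
    return 4.0 * np.pi * rho_cen[i] / dm_cen[i] * (
        r_edge[i] ** 2 * dv_in * w_in + r_edge[i + 1] ** 2 * dv_out * w_out)
\end{verbatim}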
Similarly, defining $B^{'}$ according to \begin{equation} B_{i',j',k'}\equiv \frac{3}{2}\frac{\bar{r}_{i+1}^{2}-\bar{r}_{i}^{2}}{\bar{r}_{i+1}^{3}-\bar{r}_{i}^{3}}\bar{v}_{i+1}B^{'}_{j',k'}, \end{equation} and again focusing on the appropriate integration (in this case, over $\mu$), Mezzacappa et~al.\ require that (below, the term involving $B'$ comes from the zeroth moment of the second term in brackets in the energy advection term (\ref{eq:eq_observer_frequency}), \emph{after an integration by parts in angle, $\mu$}; the second term is the last term in Eq.~\eqref{eq:discrete1}, corresponding to the first moment of the angular advection term): \begin{eqnarray} \label{eq:coeffdiff3} 0 & = & \sum_{j=1,jmax}B^{'}_{j'}F_{j'}w_{j'} +\sum_{j=1,jmax}\frac{2}{w_{j'}}[\mu_{j'}\alpha_{j+1}F_{j+1}-\mu_{j'}\alpha_{j}F_{j}]w_{j'} \\ &=&\sum_{j=1,jmax}B^{'}_{j'}F_{j'}w_{j'} \nonumber \\ & + & \sum_{j=1,jmax}2[\mu_{j'}\alpha_{j+1}(\gamma F_{j'}+(1-\gamma )F_{j'+1}) -\mu_{j'}\alpha_{j} (\gamma F_{j'-1}+(1-\gamma )F_{j'})] \nonumber \\ &=&\sum_{j=1,jmax}B^{'}_{j'}F_{j'}w_{j'} \nonumber \\ &+&\sum_{j=1,jmax}2[\mu_{j'}\alpha_{j+1}\gamma -\mu_{j'}\alpha_{j}(1-\gamma )]F_{j'} \nonumber \\ & + & \sum_{j=2,jmax}2\mu_{j'-1}\alpha_{j}(1-\gamma )F_{j'} +\sum_{j=1,jmax-1}(-2)\mu_{j'+1}\alpha_{j+1}\gamma F_{j'} \nonumber \\ &=&\sum_{j=1,jmax}B^{'}_{j'}F_{j'}w_{j'} \nonumber \\ & + & \sum_{j=1,jmax}2\gamma \alpha_{j+1}(\mu_{j'}-\mu_{j'+1})F_{j'} +\sum_{j=1,jmax}2(1-\gamma )\alpha_{j} (\mu_{j'-1}-\mu_{j'})F_{j'}, \nonumber \end{eqnarray} which gives \begin{equation} B_{i',j',k'} =\frac{3}{2}\frac{\bar{r}_{i+1}^{2}-\bar{r}_{i}^{2}}{\bar{r}_{i+1}^{3}-\bar{r}_{i}^{3}}\bar{v}_{i+1} [2\gamma_{i',k'} \alpha_{j+1}\frac{\mu_{j'+1}-\mu_{j'}}{w_{j'}} +2(1-\gamma_{i',k'} )\alpha_{j} \frac{\mu_{j'}-\mu_{j'-1}}{w_{j'}}]. \label{eq:B} \end{equation} Given the necessary matched finite differencing for $A$ and $B$, Mezzacappa et~al.\ then consider the finite difference representation of the energy advection term (\ref{eq:eq_observer_frequency}). Using the definitions (\ref{eq:Adef}) and (\ref{eq:Bdef}), they rewrite the equation corresponding to the change in the distribution function due to relativistic energy advection as \begin{equation} 0=E^{3}\left( \frac{\partial F}{\partial t}\right) _{E}+ \left( \mu ^{2}A-B\right) E\frac{\partial }{\partial E}\left[ E^{3}F\right] , \label{eq:eq_lagrangian_energy_derivative} \end{equation} and then solve it analytically. To solve Eq.~\eqref{eq:eq_lagrangian_energy_derivative}, Mezzacappa et~al.\ write the prefactor of the energy derivative as the time derivative of the quantity \begin{equation} R_{f}=r^{3\mu^{2}-1}\rho^{\mu^2}; \end{equation} i.e., \begin{equation} \frac{\partial \rm{ln} R_{f}}{\partial t}=\mu ^{2}A-B. \end{equation} They then transform from the {}``Eulerian'' variable \( x=E \) to the {}``Lagrangian'' variable \( y=E/R_{f} \), and in so doing they transform Eq.~\eqref{eq:eq_lagrangian_energy_derivative}: \begin{eqnarray} 0 & = & \left( \frac{\partial }{\partial t}\left[ E^{3}F\right] \right) _{E}+\frac{\partial R_{f}}{R^{2}_{f}\partial t}E\times R_{f}\frac{\partial }{\partial E}\left[ E^{3}F\right] \nonumber \\ & = & \left( \frac{\partial }{\partial t}\left[ E^{3}F\right] \right) _{E}-\left( \frac{\partial \left[ E/R_{f}\right] }{\partial t}\right) _{E}\frac{\partial \left[ E^{3}F\right] }{\partial \left[ E/R_{f}\right] } \nonumber \\ & = &\left( \frac{\partial }{\partial t}\left[ E^{3}F\right] \right) _{E/R_{f}}. 
\end{eqnarray} For a small section of energy phase space \( E^{2}\Delta E=\left( E^{3}_{2}-E^{3}_{1}\right) /3 \), this relationship leads to \begin{equation} \left( \frac{\partial }{\partial t}\left[ E^{2}F\Delta E\right] \right) _{E/R_{f}}=0, \label{eq:eq_bunch_enumber_evolution} \end{equation} which has the following interpretation: The neutrinos in the energy interval \( E^{2}\Delta E \) move along constant \( E/R_{f} \) in the phase space of a comoving observer. Given this, Mezzacappa et~al.\ are able to evolve any neutrino quantity in this phase-space interval---in particular, the neutrino specific energy, \( d\epsilon =E^{3}F\Delta E \): \begin{equation} \left( \frac{\partial }{\partial t}\left[ E^{3}F\Delta E\right] \right)_{E/R_{f}} =E^{2}F\Delta E\left( \frac{\partial E}{\partial t}\right) _{E/R_{f}}=\frac{\partial \rm{ln} R_{f}}{\partial t}d\epsilon . \label{eq:eq_specific_energy_change} \end{equation} They then consider a neutrino energy group \( k' \), with neighboring groups \( k'+dk \), \( dk=\pm 1 \). From Eq.~\eqref{eq:eq_bunch_enumber_evolution}, the number of neutrinos before energy advection, \( F_{i',j',k'}E^{2}_{k'}dE_{k'} \), is equal to the number of neutrinos after advection. The distribution of these neutrinos in energy after the advection yields a diminished number of neutrinos \( F_{i',j',k'}E_{k'}^{2}dE_{k'}-n_{i',j',k'}^{-} \) in group \( k' \) and an additional number of neutrinos \( n_{i',j',k'+dk}^{+} \) in the neighboring group \( k'+dk \) such that \begin{equation} F_{i',j',k'}E_{k'}^{2}dE_{k'}-\left[ \left( F_{i',j',k'}E_{k'}^{2}dE_{k'}-n^{-}_{i',j',k'}\right) +n^{+}_{i',j',k'+dk}\right] =0. \label{eq:eq_bunch_enumber_fd} \end{equation} Eq.~\eqref{eq:eq_specific_energy_change} defines a similar correction for the specific neutrino energy in group $k'$: \begin{eqnarray} F_{i',j',k'}E_{k'}^{3}dE_{k'} & - & \left[ \left( F_{i',j',k'}E_{k'}^{3}dE_{k'}-E_{k'}n^{-}_{i',j',k'}\right) +E_{k'+dk}n^{+}_{i',j',k'+dk}\right] \nonumber \\ & = & -\left( \mu ^{2}_{j'}A_{i',k'}-B_{i',j',k'}\right) F_{i',j',k'}E^{3}_{k'}dE_{k'}dt, \label{eq:eq_specific_energy_change_fd} \end{eqnarray} where $A_{i',k'}$ and $B_{i',j',k'}$ are given by Eqs.~\eqref{eq:coeffdiff1}, \eqref{eq:coeffdiff2}, and \eqref{eq:B}. Equations~\eqref{eq:eq_bunch_enumber_fd} and \eqref{eq:eq_specific_energy_change_fd} can be solved for $n^{-}_{i',j',k'}$ and $n^{+}_{i',j',k'}$: \begin{eqnarray} n^{-}_{i',j',k'} & = & \left( \mu _{j'}^{2}A_{i',k'}-B_{i',j',k'}\right) \frac{dE_{k'}}{E_{k'+dk}-E_{k'}}E^{3}_{k'}F_{i',j',k'}dt , \nonumber \\ n^{+}_{i',j',k'} & = & n_{i',j',k'-dk}^{-},\label{eq:eq_oe_deltaplusminus} \end{eqnarray} which, given that the change in the neutrino distribution function in group $k'$ due to energy advection can be expressed as \begin{equation} F_{i',j',k'}=F_{i',j',k'}^{n}+\left( n^{+}_{i',j',k'}-n^{-}_{i',j',k'}\right) /\left( E_{k'}^{2}dE_{k'}\right), \end{equation} leads to the following finite difference representation of the energy advection term in the Boltzmann equation \eqref{eq:boltzeq}: \begin{eqnarray} \frac{1}{E^{2}_{k'}dE_{k'}} \left[ \left( \mu _{j'}^{2}A_{i',k'-dk}-B_{i',j',k'}\right) \frac{dE_{k'-dk}}{E_{k'}-E_{k'-dk}}E_{k'-dk}^{3}F_{i',j',k'-dk}\right. \nonumber \\ - \left. \left( \mu _{j'}^{2}A_{i',k'}-B_{i',j',k'}\right) \frac{dE_{k'}}{E_{k'+dk}-E_{k'}}E_{k'}^{3}F_{i',j',k'}\right] .\label{eq:eq_oe_fd} \end{eqnarray}
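A number-conservative sketch of the advection step defined by Eqs.~\eqref{eq:eq_oe_deltaplusminus}, per spatial zone and angle (the array handling is illustrative, and we treat the advection prefactor as a scalar for clarity):
\begin{verbatim}
import numpy as np

def energy_advect(F, E, dE, shift, dt, dk=1):
    """One number-conservative energy-advection step.

    F     : (Nk,) distribution values F_{i',j',k'}
    E, dE : (Nk,) group centers E_{k'} and widths dE_{k'}
    shift : scalar mu_{j'}^2 A_{i',k'} - B_{i',j',k'}
    dk    : +1 or -1, the neighboring group receiving the neutrinos
    """
    Nk = F.size
    n_minus = np.zeros(Nk)
    for k in range(Nk):
        if 0 <= k + dk < Nk:                 # no donation past the grid edge
            n_minus[k] = (shift * dE[k] / (E[k + dk] - E[k])
                          * E[k]**3 * F[k] * dt)
    n_plus = np.roll(n_minus, dk)            # n+_{k'} = n-_{k'-dk}
    return F + (n_plus - n_minus) / (E**2 * dE)
\end{verbatim}
Summing $E_{k'}^{2}dE_{k'}$ times the update over all groups recovers the original neutrino number, as required by Eq.~\eqref{eq:eq_bunch_enumber_fd}.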
Mezzacappa et~al.\ are then left with the task of finding a finite difference representation for the angular advection term in Eq.~\eqref{eq:boltzeq}. Their finite differencing of the energy advection term conserves specific neutrino energy. Their finite differencing of the angular advection term is designed to conserve specific neutrino luminosity. With \( \zeta =1-\mu ^{2} \), the angular aberration term can be rewritten as \begin{equation} 0=\left(\frac{\partial F}{\partial t}\right)_{\mu}+\left( A+B/\zeta \right) \frac{\partial }{\partial \mu }\left[ \zeta \mu F\right]. \label{eq:aberration} \end{equation} As before, Mezzacappa et~al.\ seek an analytic solution to Eq.~\eqref{eq:aberration}. To do so, they convert the prefactor of the angular derivative to a time derivative. For the quantity \( R_{a}=r^{3}\rho \), they find \begin{equation} \frac{d\rm{ln} R_{a}}{dt}=A+B/\zeta . \end{equation} They then rewrite Eq.~\eqref{eq:aberration} in terms of the {}``Lagrangian'' variable \( y=\zeta ^{-1/2}\mu /R_{a} \) instead of the {}``Eulerian'' variable \( x=\mu \). After multiplication by \( \zeta \mu \), Eq.~\eqref{eq:aberration} becomes: \begin{eqnarray} 0 & = & \zeta \mu \left[ \left( \frac{\partial F}{\partial t}\right) _{\mu }+\left( A+B/\zeta \right) \frac{\partial }{\partial \mu }\left[ \zeta \mu F\right] \right] \nonumber \\ & = & \left( \frac{\partial }{\partial t}\left[ \zeta \mu F\right] \right) _{\mu }+\zeta ^{-1/2}\mu \frac{\partial R_{a}}{R^{2}_{a}\partial t}\times \zeta ^{3/2}R_{a}\frac{\partial }{\partial \mu }\left[ \zeta \mu F\right] \nonumber \\ & = & \left( \frac{\partial }{\partial t}\left[ \zeta \mu F\right] \right) _{\mu } - \left( \frac{\partial \left[ \zeta ^{-1/2}\mu /R_{a}\right] }{\partial t}\right) _{\mu }\frac{\partial \left[ \zeta \mu F\right] }{\partial \left[ \zeta ^{-1/2}\mu /R_{a}\right] }\nonumber \\ & = & \left( \frac{\partial }{\partial t}\left[ \zeta \mu F\right] \right) _{\zeta ^{-1/2}\mu /R_{a}}. \end{eqnarray} As before, the interpretation is clear: The neutrinos initially residing in the interval \( \left( 1-3\mu ^{2}\right) \Delta \mu =\zeta _{2}\mu _{2}-\zeta _{1}\mu _{1} \) are shifted by angular aberration along constant \( \mu /\left( \sqrt{\zeta }R_{a}\right) \) in the phase space of a comoving observer: \begin{equation} \left( \frac{\partial }{\partial t}\left[ \left( 1-3\mu ^{2}\right) F\Delta \mu \right] \right) _{\zeta ^{-1/2}\mu /R_{a}}=0. \label{eq:eq_bunch_lnumber_evolution} \end{equation} Given Eq.~\eqref{eq:eq_bunch_lnumber_evolution}, Mezzacappa et~al.\ are in turn able to evaluate the change in other neutrino quantities---in particular, the specific neutrino luminosity, \( d\ell =\left( 1-3\mu ^{2}\right) \mu F\Delta \mu \): \begin{eqnarray} \left( \frac{\partial }{\partial t}\left[ \left( 1-3\mu ^{2}\right) \mu F\Delta \mu \right] \right)_{\zeta ^{-1/2}\mu /R_{a}} & = &\left( 1-3\mu ^{2}\right) F\Delta \mu \left( \frac{\partial \mu}{\partial t}\right)_{\zeta ^{-1/2}/R_{a}}\nonumber \\ & = &\zeta \frac{\partial \rm{ln} R_{a}}{\partial t}d\ell .
\label{eq:eq_specific_luminosity_change} \end{eqnarray} Identifying their bin size \( \left( 1-3\mu _{j'}^{2}\right) \Delta \mu _{j'}=w_{j'} \) with their Gaussian quadrature weights, Eq.~\eqref{eq:eq_bunch_lnumber_evolution} leads to their condition for neutrino number conservation, \begin{equation} F_{i',j',k'}w_{j'}-\left[ \left( F_{i',j',k'}w_{j'}-n^{-}_{i',j',k'}\right) +n^{+}_{i',j'+dj,k'}\right] =0, \end{equation} and Eq.~\eqref{eq:eq_specific_luminosity_change} leads to their prescription for the numerical evolution of the specific luminosity, \begin{eqnarray} F_{i',j',k'}\mu _{j'}w_{j'} & - & \left[ \left( F_{i',j',k'}\mu _{j'}w_{j'}-\mu _{j'}n^{-}_{i',j',k'}\right) +\mu _{j'+dj}n^{+}_{i',j'+dj,k'}\right] \nonumber \\ & = & -\left( \zeta _{j'}A_{i',k'}+B_{i',j',k'}\right) F_{i',j',k'}\mu _{j'}w_{j'}dt, \end{eqnarray} where \( dj=\pm 1 \). The change in the neutrino distribution from angular aberration is then \begin{equation} F_{i',j',k'}=F_{i',j',k'}^{n}+\left( n_{i',j',k'}^{+}-n_{i',j',k'}^{-}\right) /w_{j'}, \label{eq:fdot_aberration} \end{equation} with \begin{eqnarray} n^{-}_{i',j',k'} & = & \left( A_{i',k'}+B_{i',j',k'}/\zeta _{j'}\right) \frac{w_{j'}}{\mu _{j'+dj}-\mu _{j'}}\zeta _{j'}\mu _{j'}F_{i',j',k'}dt, \nonumber \\ n^{+}_{i',j',k'} & = & n^{-}_{i',j'-dj,k'}. \end{eqnarray} This leads to the following finite difference representation of the angular aberration term in the Boltzmann equation \eqref{eq:boltzeq}: \begin{eqnarray} \frac{1}{w_{j'}}\left[ \left( A_{i',k'}+B_{i',j'-dj,k'}/\zeta _{j'-dj}\right) \frac{w_{j'-dj}}{\mu _{j'}-\mu _{j'-dj}}\zeta _{j'-dj}\mu _{j'-dj}F_{i',j'-dj,k'}\right. \nonumber \\ - \left. \left( A_{i',k'}+B_{i',j',k'}/\zeta _{j'}\right) \frac{w_{j'}}{\mu _{j'+dj}-\mu _{j'}}\zeta _{j'}\mu _{j'}F_{i',j',k'}\right] ,\label{eq:eq_omu_fd} \end{eqnarray} where \( dj=+1 \) for \( \mu_{j'} \leq 0 \) and \( dj=-1 \) for \( \mu_{j'} >0 \). Given the finite differencing for all of the terms in the Boltzmann equation \eqref{eq:boltzeq}---i.e., Eqs.~\eqref{eq:eq_ct_fd}, \eqref{eq:eq_fd_da}, \eqref{eq:eq_dmu_fd}, \eqref{eq:eq_omu_fd}, \eqref{eq:eq_oe_fd}, and \eqref{eq:collfd}---Mezzacappa et~al.\ solve the discretized equation as follows. With the exception of the discretized time derivative, which is a finite difference of the values of the distribution function at time step $t^{n+1}$ and $t^n$, the distribution function in all of the remaining terms is defined at time step $t^{n+1}$---i.e., Mezzacappa et~al.\ employ a fully implicit approach, including phase-space advection and collisions. Linearization is necessary given the blocking factors in the collision term and the products of the distribution functions with the neutrino opacities: the opacities are functions of the specific internal energy and electron fraction of the matter, which are in turn updated together with the distribution functions as lepton number and energy are exchanged with the matter through collisions [see Equations (\ref{eq:fluidFourMomentumConservation}), (\ref{eq:ElectronNumberConservation}), (\ref{eq:electronfractionequationsourceterm}), and (\ref{eq:fourmomentumequationsourceterm})].
Specifically, Mezzacappa et~al.\ introduce the linearizations \begin{eqnarray} F_{i',j',k'} & = & F^{0}_{i',j',k'}+\delta F_{i',j',k'}, \label{eq:linearizationF} \\ \epsilon_{i'} & = & \epsilon^{0}_{i'}+\delta\epsilon_{i'}, \label{eq:linearizationepsilon} \\ (Y_e)_{i'} & = & (Y_e)^{0}_{i'}+\delta (Y_e)_{i'}, \label{eq:linearizationYe} \end{eqnarray} where a $0$ superscript denotes the value of the variable at the current iterate in an outer Newton iteration of the solution algorithm. Given the dependence of $j$, $\tilde{\chi}$, $R_{\rm IS}$, $R_{\rm NIS}$, and $R_{\rm PAIR}$ on $\rho$, $T$, and $Y_e$, the above linearizations lead to linearizations in all of these quantities. For example: \begin{equation} j_{i',k'}=j^{0}_{i',k'}+\left[\left(\frac{\partial j}{\partial T}\right)_{\rho,Y_e}\right]^{0}_{i',k'}\delta T_{i'}+\left[\left(\frac{\partial j}{\partial Y_e}\right)_{\rho,T}\right]^{0}_{i',k'}\delta (Y_e)_{i'}, \end{equation} where $\delta T_{i'}$ is related to $\delta\epsilon_{i'}$ through the equation of state. Insertion of these linearizations into the finite differenced Boltzmann equation leads to a block-tridiagonal linear system of equations for the quantities $\delta F_{i',j',k'}$, $\delta\epsilon_{i'}$, and $\delta (Y_e)_{i'}$, which is solved at each outer iteration until a prescribed tolerance is reached for all of the variables. The block tridiagonal system has the form \begin{equation} -\mathbf{C}_{i}\mathbf{V}_{i-1}+\mathbf{A}_{i}\mathbf{V}_{i}-\mathbf{B}_{i+1}\mathbf{V}_{i+1}=\mathbf{U}_{i}, \label{eq:blocktridiag} \end{equation} where $\mathbf{B}_{i}$ and $\mathbf{C}_{i}$ are diagonal, reflecting the fact that spatial advection couples nearest neighbors only, and where $\mathbf{A}_{i}$ has the form \begin{equation} \mathbf{A}_{i}= \left( \begin{array}{cc} A_{1} & A_{2} \\ A_{3} & A_{4} \end{array} \right). \label{eq:matrixA} \end{equation} $\mathbf{A}_{i}$ is an $M\times M$ matrix, where $M=jmax \times kmax +2$. $jmax$ corresponds to the number of angular quadrature points used in the discrete ordinates implementation, and $kmax$ corresponds to the number of energy groups. The submatrix $A_{1}$ is of dimension $(M-2)\times (M-2)$; the submatrices $A_{2}$ and $A_{3}$ are of dimension $(M-2)\times 2$ and $2\times (M-2)$, respectively; and $A_{4}$ is a $2\times 2$ matrix. The two rightmost columns of $\mathbf{A}_{i}$ and the two bottommost rows correspond to the coupling of the Boltzmann equation to the equations for the specific internal energy and electron fraction of the matter, accounting for energy and lepton number exchange. The solution vector, $\mathbf{V}_{i}$, comprising the quantities $\delta F_{i',j',k'}$, $\delta\epsilon_{i'}$, and $\delta (Y_e)_{i'}$, has the form % \begin{equation} \left( \begin{array}{c} \delta F_{i',1',1'} \\ \delta F_{i',2',1'} \\ \vdots \\ \delta F_{i',1',2'} \\ \delta F_{i',2',2'} \\ \vdots \\ \delta\epsilon_{i'} \\ \delta (Y_e)_{i'} \end{array} \right). \label{eq:solnvector} \end{equation} % \citet{DaMeMe05} developed a physics-based preconditioner for the above system. This ``ADI-like'' preconditioner treats the diagonal dense blocks, which correspond to coupling in momentum space, and the tridiagonal bands, which correspond to coupling in space, separately, and has proven very effective. For Mezzacappa et al., neutrino momentum exchange with the matter is handled separately, during the hydrodynamics update, and is differenced explicitly in time.
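For orientation, a generic block-Thomas (block forward elimination and back substitution) solve of Eq.~\eqref{eq:blocktridiag} might look as follows. This is a sketch only: it treats all blocks as dense and ignores both the preconditioning and the diagonal structure of $\mathbf{B}_{i}$ and $\mathbf{C}_{i}$ that a production solver would exploit.
\begin{verbatim}
import numpy as np

def block_thomas(A, B, C, U):
    """Solve -C[i] V[i-1] + A[i] V[i] - B[i+1] V[i+1] = U[i].

    A : (I, M, M) diagonal blocks
    B : (I+1, M, M) super-diagonal blocks; B[0] unused, B[I] = 0
    C : (I, M, M) sub-diagonal blocks; C[0] = 0
    U : (I, M) right-hand sides
    """
    I, M = U.shape
    D = np.zeros((I, M, M))
    e = np.zeros((I, M))
    for i in range(I):                       # forward elimination
        Mi = A[i] - (C[i] @ D[i - 1] if i > 0 else 0.0)
        D[i] = np.linalg.solve(Mi, B[i + 1])
        e[i] = np.linalg.solve(Mi, U[i] + (C[i] @ e[i - 1] if i > 0 else 0.0))
    V = np.zeros((I, M))
    V[-1] = e[-1]
    for i in range(I - 2, -1, -1):           # back substitution
        V[i] = D[i] @ V[i + 1] + e[i]
    return V
\end{verbatim}
The cost is dominated by one dense $M\times M$ solve per spatial zone, which motivates preconditioners of the kind developed by \citet{DaMeMe05}.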
\subsubsection{Challenges: neutrino--nucleon (small-energy) scattering} In the case of neutrino--electron scattering, for example, where the energy transfer is not small in comparison with the widths of the zones of our energy grid, Eq.~\eqref{eq:boltzeq} is differenced using zone-centered values of energy in both the neutrino distribution function and the scattering kernels. However, for energy transfers that are small compared with our energy zone widths, the scattering kernel $R_{\rm NNS}^{\rm in/out}(\epsilon_{k}, \epsilon_{k'}, \cos\theta )$ will be effectively zero if $\epsilon_{k} \ne \epsilon_{k'}$, and the scattering will become essentially isoenergetic, with negligible energy transfer. As already discussed, while the transfer of energy between neutrinos and nucleons during a scattering event is small, there are many such scatterings, and the overall impact of the energy exchange between the neutrinos and nucleons in these events is nonnegligible. Thus, a numerical treatment of this scattering contribution must be developed that reflects the importance of the energy exchange between neutrinos and matter and, more importantly, captures this exchange accurately. Focusing on this term in the collision term, we have \begin{eqnarray} & {\displaystyle \pderiv{ f(\mu, \epsilon) }{ t } = [ 1 - f(\mu, \epsilon) ] \frac{1}{(hc)^{3}} \int_{0}^{\infty} \epsilon'^{2} d\epsilon' \int_{-1}^{1} d\mu' f(\mu', \epsilon') \int_{0}^{2\pi} d\beta' R_{\rm NNS}^{\rm in}(\epsilon, \epsilon', \cos\theta ) } & \nonumber \\ & {\displaystyle - f(\mu, \epsilon) \frac{1}{(hc)^{3}} \int_{0}^{\infty} \epsilon'^{2} d\epsilon' \int_{-1}^{1} d\mu' [1 - f(\mu', \epsilon') ] \int_{0}^{2\pi} d\beta' R_{\rm NNS}^{\rm out}(\epsilon, \epsilon', \cos\theta ), } & \label{eq:b1} \end{eqnarray} where, for simplicity, we have suppressed the radial and temporal dimensions. With the energy zone centers, $\epsilon_{k+1/2}$, defined in terms of the energy zone edges, $\epsilon_{k}$, as \begin{equation} \epsilon_{k+\frac{1}{2}} = \left\{ \frac{1}{3} [ \epsilon_{k}^{2} + \epsilon_{k}\epsilon_{k+1} + \epsilon_{k+1}^{2} ] \right\}^{1/2}, \label{eq:b2} \end{equation} the volume of an energy zone is given by \begin{equation} 4\pi \epsilon_{k+\frac{1}{2}}^{2} \Delta \epsilon_{k+\frac{1}{2}} = \frac{4\pi}{3} [ \epsilon_{k+1}^{3} - \epsilon_{k}^{3} ], \label{eq:b3} \end{equation} where \begin{equation} \Delta \epsilon_{k+\frac{1}{2}} = \epsilon_{k+1} - \epsilon_{k}. \label{eq:b4} \end{equation} The integral over energy can now be replaced by \begin{equation} \int_{0}^{\epsilon_{N+1}} \epsilon^{2} d\epsilon = \sum_{k=1}^{N} \epsilon_{k+\frac{1}{2}}^{2} \Delta \epsilon_{k+\frac{1}{2}}, \label{eq:b4a} \end{equation} and Eq.~\eqref{eq:b1} becomes \begin{eqnarray} & {\displaystyle \left.
\pderiv{ f(\mu, \epsilon) }{ t } \right|_{\rm scat} } & \nonumber \\ & {\displaystyle \simeq [ 1 - f(\mu, \epsilon) ] \frac{1}{(hc)^{3}} \int_{0}^{\epsilon_{N+1}} \epsilon '^{2} d\epsilon' \int_{-1}^{1} d\mu' f(\mu', \epsilon') \int_{0}^{2\pi} d\beta' R_{\rm NNS}^{\rm in}(\epsilon, \epsilon', \cos\theta ) } & \nonumber \\ & {\displaystyle - f(\mu, \epsilon) \frac{1}{(hc)^{3}} \int_{0}^{\epsilon_{N+1}} \epsilon'^{2} d\epsilon' \int_{-1}^{1} d\mu' [1 - f(\mu', \epsilon') ] \int_{0}^{2\pi} d\beta' R_{\rm NNS}^{\rm out}(\epsilon, \epsilon', \cos\theta ) } & \nonumber \\ & {\displaystyle = [ 1 - f(\mu, \epsilon) ] \frac{1}{(hc)^{3}} \sum_{k'=1}^{N} \int_{\epsilon_{k'}}^{\epsilon_{k'+1}} {\epsilon'}^{2} d\epsilon' \int_{-1}^{1} d\mu' f(\mu', \epsilon') \int_{0}^{2\pi} d\beta' R_{\rm NNS}^{\rm in}(\epsilon, \epsilon', \cos\theta ) } & \nonumber \\ & {\displaystyle - f(\mu, \epsilon) \frac{1}{(hc)^{3}} \sum_{k'=1}^{N} \int_{\epsilon_{k'}}^{\epsilon_{k'+1}} \epsilon'^{2} d\epsilon' \int_{-1}^{1} d\mu' [1 - f(\mu', \epsilon') ] \int_{0}^{2\pi} d\beta' R_{\rm NNS}^{\rm out}(\epsilon, \epsilon', \cos\theta ) } & \nonumber \\ & {\displaystyle = [ 1 - f(\mu, \epsilon) ] \frac{1}{(hc)^{3}} \sum_{k'=1}^{N} \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} \frac{1}{ \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} } \int_{\epsilon_{k'}}^{\epsilon_{k'+1}} {\epsilon'}^{2} d\epsilon' } & \nonumber \\ & {\displaystyle \times \int_{-1}^{1} d\mu' f(\mu', \epsilon') \int_{0}^{2\pi} d\beta' R_{\rm NNS}^{\rm in}(\epsilon, \epsilon', \cos\theta ) } & \nonumber \\ & {\displaystyle - f(\mu, \epsilon) \frac{1}{(hc)^{3}} \sum_{k'=1}^{N} \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} \frac{1}{ \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} } \int_{\epsilon_{k'}}^{\epsilon_{k'+1}}{\epsilon'}^{2} d\epsilon' } & \nonumber \\ & {\displaystyle \times \int_{-1}^{1} d\mu' [1 - f(\mu', \epsilon') ] \int_{0}^{2\pi} d\beta' R_{\rm NNS}^{\rm out}(\epsilon, \epsilon', \cos\theta ). } & \label{eq:b5} \end{eqnarray} In Eq.~\eqref{eq:b5}, the first approximation was made by truncating the energy integral at $\epsilon_{N+1}$. In the second equality, the integral over the entire energy domain is broken up into segments within the domain, corresponding to the energy zone widths. This is not an approximation. In the last equality, we have inserted a factor of unity inside the summation over energy groups, which, again, is not an approximation. Therefore, no approximations have been made thus far except for truncating the range of the energy integration. The ultimate goal of an improved treatment of small-energy, neutrino--nucleon scattering is to accurately compute the energy transfer between the neutrinos and the nucleons---i.e., to compute accurately the change in the neutrino energy within each of the groups of our energy grid from such scattering. The change in the neutrino energy within a group is given by \begin{eqnarray} & {\displaystyle \left. \pderiv{ E_{k+\frac{1}{2}} }{ t } \right|_{\rm scat} = \frac{1}{(hc)^{3}} \int_{\epsilon_{k}}^{\epsilon_{k+1}} \epsilon^{3} d\epsilon \left. 
\pderiv{ f(\mu, \epsilon) }{ t } \right|_{\rm scat} } & \nonumber \\ & {\displaystyle = \frac{1}{(hc)^{3}} \int_{\epsilon_{k}}^{\epsilon_{k+1}} \epsilon^{3} d\epsilon [ 1 - f(\mu, \epsilon) ] \frac{1}{(hc)^{3}} \sum_{k'=1}^{N} \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} \frac{1}{ \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} } \int_{\epsilon_{k'}}^{\epsilon_{k'+1}} {\epsilon'}^{2} d\epsilon' } & \nonumber \\ & {\displaystyle \times \int_{-1}^{1} d\mu' f(\mu', \epsilon') \int_{0}^{2\pi} d\beta' R_{\rm NNS}^{\rm in}(\epsilon, \epsilon', \cos\theta ) } & \nonumber \\ & {\displaystyle - \frac{1}{(hc)^{3}} \int_{\epsilon_{k}}^{\epsilon_{k+1}} \epsilon^{3} d\epsilon f(\mu, \epsilon) \frac{1}{(hc)^{3}} \sum_{k'=1}^{N} \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} \frac{1}{ \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}}} \int_{\epsilon_{k'}}^{\epsilon_{k'+1}} {\epsilon'}^{2} d\epsilon' }& \nonumber \\ & {\displaystyle \times \int_{-1}^{1} d\mu' [1 - f(\mu', \epsilon') ] \int_{0}^{2\pi} d\beta' R_{\rm NNS}^{\rm out}(\epsilon, \epsilon', \cos\theta ), } & \label{eq:b7} \end{eqnarray} where we have inserted Eq.~\eqref{eq:b5} for the time derivative of the neutrino distribution function due to neutrino--nucleon scattering. If we now choose to define the distribution function, $f(\mu,\epsilon)$, at the energy zone centers, Eq.~\eqref{eq:b7} can be expressed as \begin{eqnarray} & {\displaystyle \frac{1}{(hc)^{3}} \int_{\epsilon_{k}}^{\epsilon_{k+1}} \epsilon^{3} d\epsilon \left. \pderiv{ f(\mu, \epsilon_{k+\frac{1}{2}}) }{ t } \right|_{\rm scat} = \left. \pderiv{ f(\mu, \epsilon_{k+\frac{1}{2}}) }{ t } \right|_{\rm scat} \frac{1}{(hc)^{3}} \int_{\epsilon_{k}}^{\epsilon_{k+1}} \epsilon^{3} d\epsilon } & \nonumber \\ & {\displaystyle = \frac{1}{(hc)^{3}} \int_{\epsilon_{k}}^{\epsilon_{k+1}} \epsilon^{3} d\epsilon [ 1 - f(\mu, \epsilon_{k+\frac{1}{2}}) ] \frac{1}{(hc)^{3}} \sum_{k'=1}^{N} \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} \frac{1}{ \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} } \int_{\epsilon_{k'}}^{\epsilon_{k'+1}} {\epsilon'}^{2} d\epsilon' } & \nonumber \\ & {\displaystyle \times \int_{-1}^{1} d\mu' f(\mu', \epsilon_{k'+\frac{1}{2}}) \int_{0}^{2\pi} d\beta' R_{\rm NNS}^{\rm in}(\epsilon, \epsilon', \cos\theta ) } & \nonumber \\ & {\displaystyle - \frac{1}{(hc)^{3}} \int_{\epsilon_{k}}^{\epsilon_{k+1}} \epsilon^{3} d\epsilon f(\mu, \epsilon_{k+\frac{1}{2}}) \frac{1}{(hc)^{3}} \sum_{k'=1}^{N} \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} \frac{1}{ \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} } \int_{\epsilon_{k'}}^{\epsilon_{k'+1}} {\epsilon'}^{2} d\epsilon' } & \nonumber \\ & {\displaystyle \times \int_{-1}^{1} d\mu' [1 - f(\mu', \epsilon_{k'+\frac{1}{2}}) ] \int_{0}^{2\pi} d\beta' R_{\rm NNS}^{\rm out}(\epsilon, \epsilon', \cos\theta ) } & \nonumber \\ & {\displaystyle = [ 1 - f(\mu, \epsilon_{k+\frac{1}{2}}) ] \frac{1}{(hc)^{3}} \sum_{k'=1}^{N} \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} \int_{-1}^{1} d\mu' f(\mu', \epsilon_{k'+\frac{1}{2}}) \int_{0}^{2\pi} d\beta' } & \nonumber \\ & {\displaystyle \times \frac{1}{(hc)^{3}} \int_{\epsilon_{k}}^{\epsilon_{k+1}} \epsilon^{3} d\epsilon \frac{1}{ \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} } \int_{\epsilon_{k'}}^{\epsilon_{k'+1}} {\epsilon'}^{2} d\epsilon' R_{\rm NNS}^{\rm in}(\epsilon, \epsilon', \cos\theta ) } & \nonumber \\ & {\displaystyle - f(\mu, 
\epsilon_{k+\frac{1}{2}}) \frac{1}{(hc)^{3}} \sum_{k'=1}^{N} \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} \int_{-1}^{1} d\mu' [1 - f(\mu', \epsilon_{k'+\frac{1}{2}}) ] \int_{0}^{2\pi} d\beta' } & \nonumber \\ & {\displaystyle \times \frac{1}{(hc)^{3}} \int_{\epsilon_{k}}^{\epsilon_{k+1}} \epsilon^{3} d\epsilon \frac{1}{ \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} } \int_{\epsilon_{k'}}^{\epsilon_{k'+1}} {\epsilon'}^{2} d\epsilon' R_{\rm NNS}^{\rm out}(\epsilon, \epsilon', \cos\theta ). } & \label{eq:b17} \end{eqnarray} The first equality in Eq.~\eqref{eq:b17} stems from the fact that, once the distribution function is evaluated at the energy zone center and, consequently, its time derivative is evaluated there, the time derivative becomes a constant integrand and can be taken outside of the integral. Dividing both sides of Eq.~\eqref{eq:b17} by \begin{equation} \frac{1}{(hc)^3}\int_{\epsilon_{k}}^{\epsilon_{k+1}} \epsilon^{3} d\epsilon = \frac{1}{(hc)^3} \epsilon_{k+1/2}^{3} \Delta \epsilon_{k+\frac{1}{2}}, \label{eq:b4e} \end{equation} we obtain \begin{eqnarray} & {\displaystyle \left. \pderiv{ f(\mu, \epsilon_{k+\frac{1}{2}}) }{ t } \right|_{\rm scat} } & \nonumber \\ & {\displaystyle = [ 1 - f(\mu, \epsilon_{k+\frac{1}{2}}) ] \frac{1}{(hc)^{3}} \sum_{k'=1}^{N} \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} \int_{-1}^{1} d\mu' f(\mu', \epsilon_{k'+\frac{1}{2}}) \int_{0}^{2\pi} d\beta' } & \nonumber \\ & {\displaystyle \times \frac{1}{ \epsilon_{k+\frac{1}{2}}^{3} \Delta \epsilon_{k+\frac{1}{2}} } \int_{\epsilon_{k}}^{\epsilon_{k+1}} \epsilon^{3} d\epsilon \frac{1}{ \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} } \int_{\epsilon_{k'}}^{\epsilon_{k'+1}} \epsilon'^{2} d\epsilon' R_{\rm NNS}^{\rm in}(\epsilon, \epsilon', \cos\theta ) } & \nonumber \\ & {\displaystyle - f(\mu, \epsilon_{k+\frac{1}{2}}) \frac{1}{(hc)^{3}} \sum_{k'=1}^{N} \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} \int_{-1}^{1} d\mu' [1 - f(\mu', \epsilon_{k'+\frac{1}{2}}) ] \int_{0}^{2\pi} d\beta' } & \nonumber \\ & {\displaystyle \times \frac{1}{ \epsilon_{k+\frac{1}{2}}^{3} \Delta \epsilon_{k+\frac{1}{2}} } \int_{\epsilon_{k}}^{\epsilon_{k+1}} \epsilon^{3} d\epsilon \frac{1}{ \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} } \int_{\epsilon_{k'}}^{\epsilon_{k'+1}} \epsilon'^{2} d\epsilon' R_{\rm NNS}^{\rm out}(\epsilon, \epsilon', \cos\theta ), } & \label{eq:b21} \end{eqnarray} which we rewrite as \begin{eqnarray} & {\displaystyle \left.
\pderiv{ f(\mu, \epsilon_{k+\frac{1}{2}}) }{ t } \right|_{\rm scat} = [ 1 - f(\mu, \epsilon_{k+\frac{1}{2}}) ] \frac{1}{(hc)^{3}} \sum_{k'=1}^{N} \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} } & \nonumber \\ & {\displaystyle \times \int_{-1}^{1} d\mu' f(\mu', \epsilon_{k'+\frac{1}{2}}) \int_{0}^{2\pi} d\beta' \langle R_{\rm NNS}^{\rm in}(\epsilon, \epsilon', \cos\theta ) \rangle_{E} } & \nonumber \\ & {\displaystyle - f(\mu, \epsilon_{k+\frac{1}{2}}) \frac{1}{(hc)^{3}} \sum_{k'=1}^{N} \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} } & \nonumber \\ & {\displaystyle \times \int_{-1}^{1} d\mu' [1 - f(\mu', \epsilon_{k'+\frac{1}{2}}) ] \int_{0}^{2\pi} d\beta' \langle R_{\rm NNS}^{\rm out}(\epsilon, \epsilon', \cos\theta ) \rangle_{E}, } & \label{eq:b22} \end{eqnarray} where \begin{eqnarray} & {\displaystyle \langle R_{\rm NNS}^{\rm in/out}(\epsilon, \epsilon', \cos\theta ) \rangle_{E} } & \nonumber \\ & {\displaystyle \equiv \frac{1}{ \epsilon_{k+\frac{1}{2}}^{3} \Delta \epsilon_{k+\frac{1}{2}} } \int_{\epsilon_{k}}^{\epsilon_{k+1}} \epsilon^{3} d\epsilon \frac{1}{ \epsilon_{k'+\frac{1}{2}}^{2} \Delta \epsilon_{k'+\frac{1}{2}} } \int_{\epsilon_{k'}}^{\epsilon_{k'+1}} \epsilon'^{2} d\epsilon' R_{\rm NNS}^{\rm in/out}(\epsilon, \epsilon', \cos\theta ). } & \label{eq:b23} \end{eqnarray} With the scattering kernel defined as in Eq.~\eqref{eq:b23} in the collision term of the Boltzmann equation, the energy transfer between the neutrinos and the nucleons resulting from the many neutrino--nucleon scattering events is captured accurately, despite the fact that the energy exchange per scattering is much less than a typical energy zone width. \subsubsection{Axisymmetry} The first implementation of multi-angle, multi-frequency neutrino transport in the context of spatially two-dimensional, axisymmetric core-collapse supernova models was achieved by \citet{OtBuDe08}. Their implementation was based on the neutrino transport solver developed by \citet{LiBuWa04} for the neutrino specific intensity ($I$), whose evolution is given by the following equation: \begin{equation} \frac{D I}{Dt}+\Omega\cdot\nabla I + \sigma I = S. \label{eq:specificintensity} \end{equation} \noindent Here, $D/Dt$ is the Lagrangian time derivative, $\Omega$ is the unit vector in the direction of neutrino propagation, whose components are $(\cos\theta_{p},\sin\theta_{p}\cos\phi_{p},\sin\theta_{p}\sin\phi_{p})$, where $\theta_{p}$ and $\phi_{p}$ are spherical momentum-space coordinates defined relative to the outward radial direction, $\sigma$ is the total cross section, including absorption and scattering, and $S$ is the total emissivity, including emission and scattering. Eq.~\eqref{eq:specificintensity} is temporally discretized fully implicitly. The phase space discretization is handled as follows. Space---i.e., radius and angle---is discretized using a conservative difference scheme. Momentum space---i.e., the space comprising the two dimensions corresponding to the angles of the neutrino's direction of propagation, $\theta_{p}$ and $\phi_{p}$, and the dimension corresponding to the neutrino's energy, $\epsilon_{\nu}$---is discretized using the discrete ordinates method. Further details of the discretization of Eq.~\eqref{eq:specificintensity} have not yet been provided.
\subsubsection{Three spatial dimensions} The journey down what will no doubt be a long road toward the implementation of general relativistic, three-dimensional Boltzmann neutrino transport in the context of core-collapse supernovae was begun by \citet{SuYa12}. With core-collapse supernovae in mind, they began by solving the conservative form of the Boltzmann equation for three-dimensional, static stellar core configurations: \begin{eqnarray} \label{eqn:eqtransfin-spherical2a} \frac{1}{c}\frac{\partial f}{\partial t} + \frac{\mu}{r^{2}} \frac{\partial}{\partial r} (r^{2} f) + \frac{\sqrt{1-\mu^{2}}~{\rm cos}~\phi_{p}}{r {\rm sin}~\theta} \frac{\partial}{\partial \theta} ({\rm sin}~\theta f) + \frac{\sqrt{1-\mu^{2}}~{\rm sin}~\phi_{p}}{r {\rm sin}~\theta} \frac{\partial f}{\partial \phi} \nonumber \\ + \frac{1}{r} \frac{\partial}{\partial \mu} [(1-\mu^{2}) f] - \frac{\sqrt{1-\mu^{2}}}{r} \frac{{\rm cos}~\theta}{{\rm sin}~\theta} \frac{\partial}{\partial \phi_{p}} ({\rm sin}~\phi_{p} f) = \left[ \frac{1}{c} \frac{\delta f}{\delta t} \right]_{\rm collision}. \end{eqnarray} In light of the use of spherical polar coordinates, there are terms that correspond to advection in momentum space even in a static medium in flat spacetime---i.e., even in the absence of special and general relativistic effects. For example, as a neutrino propagates, its direction cosine, $\mu\equiv\cos\theta_{p}$, which is defined relative to the outwardly pointing radial basis vector, will necessarily change. This is described by the fifth term on the left-hand side of Eq.~\eqref{eqn:eqtransfin-spherical2a}. This is not a geometric effect, as spacetime is flat in this case. Rather, it is a coordinate effect. The last term on the left-hand side of the same equation has a similar origin and interpretation. Given the assumption of a static medium and flat spacetime, no other terms appear on the left-hand side; additional terms describing special and general relativistic effects would appear were those effects included. The discretization of Eq.~\eqref{eqn:eqtransfin-spherical2a} follows and extends that used in \citet{MeBr93b}---i.e., finite differencing in space and energy, and discrete ordinates in neutrino propagation angles. For the second term on the left-hand side of Eq.~\eqref{eqn:eqtransfin-spherical2a}, corresponding to radial advection of neutrinos, Sumiyoshi and Yamada use the following discretization: \begin{equation} \label{eqn:advection-radial} \left[ \frac{\mu}{r^{2}} \frac{\partial}{\partial r} (r^{2} f) \right] = \left[ \mu \frac{\partial}{\partial (r^3 / 3)} (r^{2} f) \right] = {\mu}_{j} ~ \frac{3}{r_{I}^{3} - r_{I-1}^{3}} ~ ( r_{I}^{2} ~ f_{I} - r_{I-1}^{2} ~ f_{I-1} ), \end{equation} where, in their notation, $f_{I-1}$ and $f_{I}$ are the neutrino distributions at the cell interfaces of the $i$-th zone. The quantities ${\mu}_{j} f_{I}$ at the cell boundaries are defined by \begin{equation} \label{eqn:fnu-radial} {\mu}_{j} f_{I} = \frac{ {\mu}_{j} - | {\mu}_{j} | }{2} \{ ( 1 - \beta_{I} ) f_{i} + \beta_{I} f_{i+1} \} + \frac{ {\mu}_{j} + | {\mu}_{j} | }{2} \{ \beta_{I} f_{i} + ( 1 - \beta_{I} ) f_{i+1} \}, \end{equation} and $\beta_{I}$ is \begin{equation} \label{eqn:beta-radial} \beta_{I} = 1 - \frac{1}{2} \frac{\alpha \Delta r_{I} / \lambda_{I}}{1 + \alpha \Delta r_{I} / \lambda_{I}}. \end{equation} In the diffusion (free-streaming) limit, $\beta_{I}=1/2$ ($1$).
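The role of $\beta_{I}$ in Eqs.~\eqref{eqn:fnu-radial} and \eqref{eqn:beta-radial} can be made concrete with a short Python sketch; the function names, and the treatment of $\alpha$ as a tunable constant, are illustrative assumptions.
\begin{verbatim}
def beta_interface(dr, lam, alpha=1.0):
    # Eq. (beta-radial): beta_I interpolates between a centered
    # (diffusive, beta = 1/2) and an upwind (free-streaming, beta = 1)
    # interface reconstruction.  dr is the zone width Delta r_I and
    # lam the local mean free path lambda_I.
    x = alpha * dr / lam
    return 1.0 - 0.5 * x / (1.0 + x)

def interface_mu_f(mu_j, f_i, f_ip1, beta):
    # Eq. (fnu-radial): upwind-weighted value of mu * f at the face
    # between zones i and i+1; the "minus" ("plus") branch is nonzero
    # only for inward (outward) propagating neutrinos.
    minus = 0.5 * (mu_j - abs(mu_j))
    plus = 0.5 * (mu_j + abs(mu_j))
    return (minus * ((1.0 - beta) * f_i + beta * f_ip1)
            + plus * (beta * f_i + (1.0 - beta) * f_ip1))
\end{verbatim}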
The advection in $\mu = \cos \theta_{p}$ is discretized as \begin{equation} \label{eqn:advection-munu} \left[ \frac{1}{r} \frac{\partial}{\partial \mu} [~(1-\mu^{2}) f~] \right] = \frac{3}{2} ~ \frac{r_{I}^{2} - r_{I-1}^{2}}{r_{I}^{3} - r_{I-1}^{3}} ~ \frac{1}{{d \mu}_{j}} ~ \left[ (1-{\mu}^2)_{J} f_{J} - (1-{\mu}^2)_{J-1} f_{J-1} \right]. \end{equation} Upwind differencing is implemented, and $f_{J} = f_{j}$. $\theta$-advection is first reexpressed and then discretized as \begin{eqnarray} \label{eqn:advection-polar} \left[ \frac{\sqrt{1-\mu^{2}}~{\rm cos}~\phi_{p}}{r {\rm sin}~\theta} \frac{\partial}{\partial \theta} ({\rm sin}~\theta f) \right] = \left[ - \frac{\sqrt{1-\mu^{2}}~{\rm cos}~\phi_{p}}{r } \frac{\partial}{\partial \mu} [~(1-\mu^{2})^{\frac{1}{2}} f~] \right] \nonumber \\ = - \frac{3}{2} ~ \frac{r_{I_r}^{2} - r_{I_r-1}^{2}}{r_{I_r}^{3} - r_{I_r-1}^{3}} (1-{\mu}_{j_{\theta}}^{2})^{\frac{1}{2}} {\cos \phi_{p}}_{j_{\phi}}~ \frac{1}{d \mu_{i_{\theta}}} ~ \left[ (1-{\mu}^2)^{\frac{1}{2}}_{I_{\theta}} f_{I_{\theta}} - (1-{\mu}^2)^{\frac{1}{2}}_{I_{\theta}-1} f_{I_{\theta}-1} \right]. \nonumber \\ \end{eqnarray} The factor, $(1-{\mu}_{j_{\theta}}^{2})^{\frac{1}{2}} {\cos \phi_{p}}_{j_{\phi}}$, determines the direction of advection and the evaluation of $f_{I_{\theta}}$ at the cell interface. Given the sign of $\cos \phi_{p}$, $f_{I_{\theta}}$ is determined by \begin{eqnarray} \label{eqn:fnu-polar} {\cos \phi_{p}}_{j_{\phi}} f_{I_{\theta}} = \frac{ {\cos \phi_{p}}_{j_{\phi}} + | {\cos \phi_{p}}_{j_{\phi}} | }{2} \{ ( 1 - \beta_{I_{\theta}} ) f_{i_{\theta}} + \beta_{I_{\theta}} f_{i_{\theta}+1} \} \nonumber \\ + \frac{ {\cos \phi_{p}}_{j_{\phi}} - | {\cos \phi_{p}}_{j_{\phi}} | }{2} \{ \beta_{I_{\theta}} f_{i_{\theta}} + ( 1 - \beta_{I_{\theta}} ) f_{i_{\theta}+1} \}. \end{eqnarray} As before, $\beta_{I_{\theta}}$ takes on values between $\frac{1}{2}$ (diffusion limit) and $1$ (free-streaming limit) and is defined in the same way as $\beta_{I}$, using instead the angular zone widths and mean free paths. $\phi_{p}$ advection is discretized as \begin{eqnarray} \label{eqn:advection-phinu} \left[ - \frac{\sqrt{1-\mu^{2}}}{r} \frac{{\rm cos}~\theta}{{\rm sin}~\theta} \frac{\partial}{\partial \phi_{p}} ({\rm sin}~\phi_{p} f) \right] = \left[ - \frac{\sqrt{1-\mu^{2}}}{r} \frac{\mu}{\sqrt{1-\mu^{2}}} \frac{\partial}{\partial \phi_{p}} ({\rm sin}~\phi_{p} f) \right] = \nonumber \\ - \frac{3}{2} ~ \frac{r_{I_r}^{2} - r_{I_r-1}^{2}}{r_{I_r}^{3} - r_{I_r-1}^{3}} (1-{\mu}_{j_{\theta}}^{2})^{\frac{1}{2}} ~ \frac{\mu_{i_{\theta}}}{(1-{\mu_{i_{\theta}}^{2})^{\frac{1}{2}}}} ~ \frac{1}{{d \phi_{p}}_{j_{\phi}}} \left[ (\sin \phi_{p})_{J_{\phi}} f_{J_{\phi}} - (\sin \phi_{p})_{J_{\phi}-1} f_{J_{\phi}-1} \right]. \nonumber \\ \end{eqnarray} In this case, the sign of $\mu_{i_{\theta}} (\sin \phi_{p})_{J_{\phi}}$ determines the direction of advection. Upwind differencing is used to determine $f_{J_{\phi}}$ at the cell interface. $f_{J_{\phi}}$ is given by \begin{eqnarray} \label{eqn:fnu-phinu} \mu_{i_{\theta}} (\sin \phi_{p})_{J_{\phi}} f_{J_{\phi}} = \frac{ \mu_{i_{\theta}} (\sin \phi_{p})_{J_{\phi}} + | \mu_{i_{\theta}} (\sin \phi_{p})_{J_{\phi}} | }{2} f_{j_{\phi}+1} \nonumber \\ + \frac{ \mu_{i_{\theta}} (\sin \phi_{p})_{J_{\phi}} - | \mu_{i_{\theta}} (\sin \phi_{p})_{J_{\phi}} | }{2} f_{j_{\phi}}. 
\end{eqnarray} Last but not least, $\phi$ advection is discretized as follows \begin{eqnarray} \label{eqn:advection-azimuthal} \left[ \frac{\sqrt{1-\mu^{2}}~{\rm sin}~\phi_{p}}{r {\rm sin}~\theta} \frac{\partial f}{\partial \phi} \right] = \left[ \frac{\sqrt{1-\mu^{2}}~{\rm sin}~\phi_{p}}{r \sqrt{1-\mu^{2}}} \frac{\partial f}{\partial \phi} \right] \nonumber \\ = \frac{3}{2} ~ \frac{r_{I_r}^{2} - r_{I_r-1}^{2}}{r_{I_r}^{3} - r_{I_r-1}^{3}} (1-{\mu}_{j_{\theta}}^{2})^{\frac{1}{2}} ~ \frac{{\sin \phi_{p}}_{j_{\phi}}}{(1-{\mu_{i_{\theta}}^{2})^{\frac{1}{2}}}} ~ \frac{1}{{d \phi}_{i_{\phi}}} \left[ f_{I_{\phi}} - f_{I_{\phi}-1} \right]. \end{eqnarray} Given the sign of ${\sin\phi_{p}}_{j_{\phi}}$ and, therefore, the advection direction, $f_{I_{\phi}}$ is given by \begin{eqnarray} \label{eqn:fnu-azimuthal} {\sin \phi_{p}}_{j_{\phi}} f_{I_{\phi}} = \frac{ {\sin \phi_{p}}_{j_{\phi}} + | {\sin \phi_{p}}_{j_{\phi}} | }{2} \{ \beta_{I_{\phi}} f_{i_{\phi}} + (1 - \beta_{I_{\phi}}) f_{i_{\phi}+1}\} \nonumber \\ + \frac{ {\sin \phi_{p}}_{j_{\phi}} - | {\sin \phi_{p}}_{j_{\phi}} | }{2} \{ (1 - \beta_{I_{\phi}}) f_{i_{\phi}} + \beta_{I_{\phi}} f_{i_{\phi}+1}\}. \end{eqnarray} $\beta_{I_{\phi}}$ is determined in the same way as its counterparts in the radial and $\theta$ directions, using the appropriate angular zone widths and mean free paths. Focusing on the temporal discretization, the phase-space discretizations spelled out in Eqs.~\eqref{eqn:advection-radial} through \eqref{eqn:fnu-azimuthal} are assembled and evaluated in a fully implicit manner, as shown schematically below (i.e., the phase-space discretizations themselves are not inserted; each term is represented by its continuum counterpart): \begin{eqnarray} \label{eqn:boltzmann-implicit} \frac{1}{c}\frac{f_{i}^{n+1} - f_{i}^{n}}{\Delta t} + \left[ \frac{\mu}{r^{2}} \frac{\partial}{\partial r} (r^{2} f) \right]^{n+1} + \left[ \frac{\sqrt{1-\mu^{2}}~{\rm cos}~\phi_{p}}{r {\rm sin}~\theta} \frac{\partial}{\partial \theta} ({\rm sin}~\theta f) \right]^{n+1} \nonumber \\ + \left[ \frac{\sqrt{1-\mu^{2}}~{\rm sin}~\phi_{p}}{r {\rm sin}~\theta} \frac{\partial f}{\partial \phi} \right]^{n+1} + \left[ \frac{1}{r} \frac{\partial}{\partial \mu} [(1-\mu^{2}) f] \right]^{n+1} \nonumber \\ + \left[ - \frac{\sqrt{1-\mu^{2}}}{r} \frac{{\rm cos}~\theta}{{\rm sin}~\theta} \frac{\partial}{\partial \phi_{p}} ({\rm sin}~\phi_{p} f) \right]^{n+1} = \left[ \frac{1}{c} \frac{\delta f}{\delta t} \right]_{\rm collision}^{n+1}, \end{eqnarray} \noindent where $n$ designates the current time slice. The left-hand side of Eq.~\eqref{eqn:boltzmann-implicit} is linear in the distribution function, but the right-hand side is not. Consequently, as in the spherically symmetric case, both sides of Eq.~\eqref{eqn:boltzmann-implicit} are linearized in $f$. (In this case, Sumiyoshi and Yamada are working with a hydrostatic and thermally frozen stellar core profile. As a result, linearizations in $\epsilon$ and $Y_e$ are not necessary.) This gives rise to a linear system of equations for $\delta f_i$. To solve the combination of the outer nonlinear system of equations and the corresponding inner linear system of equations, Sumiyoshi and Yamada implement a Newton--Krylov approach---specifically, they implement Newton--BiCGSTAB, with point-Jacobi preconditioning. The extension of these lepton-number conservative methods to the special relativistic case was documented by \citet{NaSuYa14}. 
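The outer--inner iteration just described can be sketched in Python using SciPy's matrix-free BiCGSTAB. The callables \texttt{residual} and \texttt{jacobian\_vec} stand in for the actual discretized Boltzmann residual and its Jacobian--vector product, and the point-Jacobi preconditioner is omitted for brevity; this is a minimal sketch of the Newton--Krylov pattern, not Sumiyoshi and Yamada's implementation.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import LinearOperator, bicgstab

def newton_bicgstab(residual, jacobian_vec, f0, tol=1e-8, max_newton=50):
    # Outer Newton iteration with an inner matrix-free BiCGSTAB solve.
    f = f0.copy()
    for _ in range(max_newton):
        r = residual(f)
        if np.linalg.norm(r) < tol:
            break
        J = LinearOperator((f.size, f.size), dtype=float,
                           matvec=lambda v, f=f: jacobian_vec(f, v))
        df, info = bicgstab(J, -r)   # solve J df = -r
        if info != 0:
            raise RuntimeError("inner BiCGSTAB failed to converge")
        f = f + df
    return f
\end{verbatim}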
Nagakura et~al.\ deployed novel momentum-space gridding based on three considerations: (1) The invariant emissivity and opacity, which together define an invariant collision term on the right-hand side of the Boltzmann equation, can be computed in either the inertial, Eulerian frame or the inertial frame of an observer instantaneously comoving with the stellar core fluid. The value obtained in both cases would be the same if the neutrino angles and energies used in either case were related by the Lorentz transformation between the two inertial frames. (2) The Lorentz transformations of angles and energies between the Eulerian and comoving frames decouple---i.e., one is free to define one's energy grid in either of the two frames independently of one's angular grids, allowing choices that simplify the numerics while respecting the physics. (3) The dominant opacity during stellar core collapse stems from coherent, isoenergetic scattering on nuclei---i.e., any novel gridding should be constructed with this opacity in mind. In Nagakura et al.'s notation, the invariance of the collision term can be expressed as \begin{eqnarray} \label{eq:collisionrela} \varepsilon^{\rm{lb}}\Bigl( \frac{\delta f}{\delta t} \Bigr)_{{\rm col}}^{{\rm lb}} = \varepsilon^{\rm{fr}}\Bigl( \frac{\delta f}{\delta \tilde{t}} \Bigr)_{{\rm col}}^{\rm{fr}}. \end{eqnarray} where $t (\tilde{t})$ is the Eulerian (comoving) frame time and where the labels ${\rm lb (fr)}$ correspond to the Eulerian (comoving) frames. The equality in Eq.~\eqref{eq:collisionrela} is to be understood as follows: If one evaluates the left-hand side at a particular neutrino angle and energy as measured by the Eulerian observer, the equality is guaranteed provided the right-hand side is evaluated at the corresponding Lorentz transformed neutrino angle and energy, which would be the angle and energy measured by the comoving observer. The neutrino energies in the two frames, $\varepsilon^{\rm{lb}}$ and $\varepsilon^{\rm{fr}}$, are related by \begin{equation} \varepsilon^{\rm{fr}} = \varepsilon^{{\rm lb}} \gamma (1 - \mathbf{n}^{\rm lb} \cdot \mathbf{v}) , \label{eqn:Lorentz-energy} \end{equation} where $\gamma$ is the Lorentz factor, $\mathbf{n}^{\rm lb}$ is the neutrino propagation direction as measured in the Eulerian frame, and $\mathbf{v}$ is the fluid velocity in the same frame. The unit neutrino propagation direction vectors in the two frames are related by \begin{eqnarray} \varepsilon^{\rm{fr}} \mathbf{n}^{\rm{fr}} = \varepsilon^{{\rm lb}} \left[\mathbf{n}^{\rm lb} + \left( - \gamma + (\gamma - 1) \frac{\mathbf{n}^{\rm lb} \cdot \mathbf{v}}{v^2}\right) \mathbf{v}\right], \label{eq:energytrans} \end{eqnarray} where $\mathbf{n}^{\rm{fr}}$ denotes the unit neutrino propagation direction vector in the comoving frame. \begin{figure}[htb] \includegraphics[width=\textwidth]{momspacegrids1} \caption{The left panel shows a schematic of uniform momentum-space angular and energy grids in the Laboratory frame. Constant-energy grid lines are represented by concentric circles \citep{NaSuYa14}. Constant angles are indicated by radial lines. The right panel shows the corresponding contours and lines in the comoving frame. Also added (dotted line) is a constant comoving-frame neutrino energy contour.} \label{fig:momspacegrids1} \end{figure} \begin{figure}[htb] \includegraphics[width=\textwidth]{momspacegrids2} \caption{The left panel shows the Lagrangian Remapping Grid (LRG) used by \citet{NaSuYa14} in their Boltzmann transport implementation.
On the LRG, the neutrino angular grid is uniform, but the energy grid corresponds to a uniform energy grid in the comoving frame, shown in the right panel by concentric circles. The two energy grids are related by a Lorentz transformation. Given that the angular grid is uniform in the Laboratory frame, the corresponding angular grid in the comoving frame is not uniform. The angular grids, too, are related by a Lorentz transformation between the frames.} \label{fig:momspacegrids2} \end{figure} Figure \ref{fig:momspacegrids1} from Nagakura et~al.\ shows two momentum-space grids associated with momentum-space spherical coordinates. The grid on the left corresponds to a choice of uniform gridding in both angle and energy in the Eulerian frame. (Uniform gridding is typically not used for either, but for simplicity Nagakura et~al.\ consider this case to illustrate the essential features of their approach.) The grid on the right corresponds to the Lorentz-transformed Eulerian grid---i.e., the counterpart grid in the comoving frame. This grid is no longer uniform in either angle or energy. On the comoving-frame grid, an isoenergetic scattering event, wherein the neutrino's angle changes but its energy does not, would necessitate an interpolation in energy given the fact that the energy grid is not uniform in angle. The number (typically $\sim$20) of energy ``groups'' used in most core-collapse supernova simulations is low, and to make matters worse, the groups are typically spaced exponentially, with coarser resolution at higher energies. Interpolation on such grids is therefore problematic, and it also jeopardizes the conservation of neutrino (lepton) number. To overcome these difficulties, Nagakura et~al.\ use the independence of the Lorentz transformations of neutrino angles and energies and choose a hybrid-grid approach. They introduce the Lagrangian Remapping Grid (LRG) for the Eulerian observer, shown on the left-hand side of Fig.~\ref{fig:momspacegrids2}; this is the primary grid used in their work. On the LRG, the angular grid is uniform but the energy grid is not. The energy grid on the LRG is the Lorentz transform of the uniform energy grid shown on the right-hand side of the same figure, which corresponds to the comoving-frame observer's energy grid. Of course, by virtue of the Lorentz transformation and the fact that the angular grid is uniform in the Eulerian frame, the angular grid in the comoving frame cannot be uniform. This presents no difficulties in their approach, so Nagakura et~al.\ opt for the simplicity of the uniform angular grid on the LRG, their primary grid. In Nagakura et al.'s approach, the collision term is evaluated on the LRG; given the invariance of the collision term for such Lorentz-transform-related grids, this evaluation is the same as its evaluation on the comoving-frame grid. Since the latter energy grid is uniform across angles, no interpolation in energy is required in evaluating, for example, isoenergetic scattering. The Lorentz transformation between the two grids is spatially and temporally dependent, so the LRG must be continually redefined as the evolution proceeds, but the comoving-frame grid does not change. As the LRG evolves, a conservative remapping procedure is used to remap the neutrino distributions on the previous LRG to the new LRG.
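A minimal sketch of this grid construction in Python, in units with $c=1$ and with all names illustrative assumptions: given uniform comoving-frame energy edges and a lab-frame propagation direction at a point with fluid velocity $\mathbf{v}$, Eq.~\eqref{eqn:Lorentz-energy} is inverted to obtain the LRG energy edges along that direction.
\begin{verbatim}
import numpy as np

def lrg_energy_edges(eps_fr_edges, n_lb, v):
    # Invert Eq. (Lorentz-energy), eps_fr = eps_lb * gamma * (1 - n.v),
    # to obtain the lab-frame (LRG) energy edges that map onto a
    # uniform comoving-frame grid along direction n_lb.
    gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v))
    doppler = gamma * (1.0 - np.dot(n_lb, v))
    return np.asarray(eps_fr_edges) / doppler

# Example: 20 uniform comoving-frame groups, radially outgoing
# neutrinos, fluid collapsing at 0.1c.
edges = lrg_energy_edges(np.linspace(0.0, 300.0, 21),
                         n_lb=np.array([1.0, 0.0, 0.0]),
                         v=np.array([-0.1, 0.0, 0.0]))
\end{verbatim}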
With all of the above in mind, and focusing on isoenergetic scattering, the right-hand side of the Boltzmann equation, Eq.~\eqref{eqn:eqtransfin-spherical2a}, is evaluated on the LRG as \begin{eqnarray} & & \left(\frac{\delta f}{\delta t}\right)_{\rm collision} \\ \nonumber & & = \gamma \left( 1-\mathbf{n}^{\rm lb}\cdot \mathbf{v}\right)\left(\frac{\delta f}{\delta\tilde{t}}\right)_{\rm collision} \\ \nonumber & & = \gamma \left( 1-\mathbf{n}^{\rm lb}\cdot \mathbf{v}\right) \left[ \frac{-(\epsilon^{\rm fr})^2}{(2\pi)^3} \int d{{\Omega}^{'}}^{\rm fr} R^{\rm fr}(\Omega^{\rm fr},{\Omega^{'}}^{\rm fr}) [f^{\rm fr}(\epsilon^{\rm fr},\Omega^{\rm fr})-f^{\rm fr}(\epsilon^{\rm fr},{\Omega^{'}}^{\rm fr})]\right] \\ \nonumber & & = \gamma \left( 1-\mathbf{n}^{\rm lb}\cdot \mathbf{v}\right) [ \frac{-[\epsilon^{\rm fr}(\epsilon^{\rm lb})]^2}{(2\pi)^3}\int d{{\Omega}^{'}}^{\rm lb}\frac{d{{\Omega}^{'}}^{\rm fr}}{d{{\Omega}^{'}}^{\rm lb}} R^{\rm lb}[\Omega^{\rm fr}(\Omega^{\rm lb}),{\Omega^{'}}^{\rm fr}({\Omega^{'}}^{\rm lb})] \\ \nonumber & & \times \{f^{\rm lb}[\epsilon^{\rm fr}(\epsilon^{\rm lb}),\Omega^{\rm fr}(\Omega^{\rm lb})]-f^{\rm lb}[\epsilon^{\rm fr}(\epsilon^{\rm lb}),{\Omega^{'}}^{\rm fr}({\Omega^{'}}^{\rm lb})]\} ] \\ \nonumber & & = \gamma \left( 1-\mathbf{n}^{\rm lb}\cdot \mathbf{v}\right) [ \frac{-[\epsilon^{\rm fr}(\epsilon^{\rm lb})]^2}{(2\pi)^3}\int d{{\Omega}^{'}}^{\rm lb}\frac{d{{\Omega}^{'}}^{\rm fr}}{d{{\Omega}^{'}}^{\rm lb}} R^{\rm lb}[\Omega^{\rm fr}(\Omega^{\rm lb}),{\Omega^{'}}^{\rm fr}({\Omega^{'}}^{\rm lb})] \\ \nonumber & & \times \{ f^{\rm lb}(\epsilon^{\rm lb},\Omega^{\rm lb})-f^{\rm lb}(\epsilon^{\rm lb},{\Omega^{'}}^{\rm lb}) \} ] .\\ \nonumber \end{eqnarray} The last equality follows from the invariance of the distribution function. While the use of the LRG simplifies the evaluation of the collision term and avoids the need to introduce velocity-dependent angle and energy advection on the left-hand side of the Boltzmann equation, there is a cost: it complicates spatial advection. To overcome this inherited difficulty, Nagakura et~al.\ invoke yet another grid, the Laboratory Fixed Grid (LFG). The LFG is like the grid depicted on the left-hand side of Fig.~\ref{fig:momspacegrids1}. It is the same for all Eulerian observers at different spatial locations and is constant in time. And, in Nagakura et al.'s implementation, it is of higher resolution in energy relative to the LRG. This is evident in Fig.~\ref{fig:LRGLFG}. Given the LFG, the treatment of spatial and angular advection occurs in the following steps: (1) Using the subgrid energy distribution, $f_{\rm subgrid}$, the values of the distribution function, $f$, at the zone centers of the LFG grid are determined by $f_{\rm subgrid}(\epsilon_{\rm LFG_{A^{'},B^{'},...}})$, where $\epsilon_{\rm LFG_{A^{'},B^{'},...}}$ are the values of the energies corresponding to the zone centers on the LFG grid for zones A$^{'}$, B$^{'}$, ..., respectively. (For the example points selected here, the LFG energies are the same.) (2) Once the values of the distribution function are defined at the zone centers of the LFG, they can be used to define the spatial and angular fluxes on the LRG as follows. Consider Fig.~\ref{fig:LRGLFG}. On the left-hand side of the figure, the LRG is shown. On the right, the LFG is overlaid on the LRG. Note, too, that here we are considering advection in space and angle; the vertical axis, $y$, represents both. Let us consider LFG zones A$^{'}$ and B$^{'}$.
The flux at the interface between these two zones is determined from the value of the distribution function there, obtained by interpolating between the values of the distribution function at the A$^{'}$ and B$^{'}$ zone centers, as outlined by \citet{SuYa12}. (When invoking the LFG, this interpolation involves only two zones, not three as it would in the case of the LRG.) \begin{figure}[htb] \includegraphics[width=\textwidth]{LRGLFG} \caption{In the left panel, energy zones on the LRG are shown for adjacent radial or angular grid points, designated here by $y$ \citep{NaSuYa14}. In the right panel, the higher-resolution Laboratory Fixed Grid (LFG) is shown, superimposed on the LRG.} \label{fig:LRGLFG} \end{figure} (3) Given the fluxes on the LFG, we are ready to define the fluxes that will be used on the LRG to update the distribution function in each of the LRG's zones due to advection. Note that advection into (for example) LFG zone B$^{'}$ from A$^{'}$ involves advection into a single zone. However, it is easy to see from Fig.~\ref{fig:LRGLFG} that advection from A$^{'}$ into B$^{'}$ involves advection into two zones of the LRG: A and B. To divide the contribution of the LFG flux into B$^{'}$ into LRG fluxes into zones A and B, we split the flux as follows: \begin{equation} F_{A^{'}|B^{'}}=\gamma F_{A^{'}|B^{'}} + (1-\gamma)F_{A^{'}|B^{'}}, \label{eq:apportioningoff} \end{equation} where $F_{A^{'}|B^{'}}$ is the LFG flux at the interface between LFG zones A$^{'}$ and B$^{'}$ and where \begin{eqnarray} \label{eq:NLNR} \gamma & = &\frac{N_L}{N_L+N_R}, \\ N_L & = & |\epsilon^{3}_{AB}-\epsilon^{3}_{L}|f_A, \\ N_R & = & |\epsilon^{3}_{AB}-\epsilon^{3}_{R}|f_B, \end{eqnarray} with $\epsilon_{AB}$ corresponding to the value of the energy at the interface of the LRG zones A and B and where $\epsilon_{L(R)}$ corresponds to the energy value associated with the left (right) boundary of the LFG zone B$^{'}$. $f_{A(B)}$ corresponds to the value of the distribution function on the LRG in zone A(B). In other words, the LFG flux at the interface of LFG zones A$^{'}$ and B$^{'}$ is split, according to the distribution-weighted energy volume, between LRG fluxes into zones A and B. Note that zone B, for example, has multiple LFG fluxes advecting into it. The total LRG flux for zone B would therefore be the sum of all of the relevant LFG fluxes into it determined in the manner described here. (4) Once the LRG interface fluxes are defined as in step 3, the spatial (or angular) advection on the LRG is carried out as outlined by \citet{SuYa12}. Nagakura et al.'s novel method has been designed to conserve lepton number. That it simultaneously conserves energy at an appropriate level remains to be demonstrated. With regard to the temporal discretization with special relativistic effects included, Nagakura et~al.\ use a semi-implicit method. This is necessitated by the fact that the methods outlined above for the treatment of advection on the LRG cannot be made fully implicit. With the temporal discretization alone in mind, the Boltzmann equation can be written as \begin{eqnarray} \frac{f^{n+1} - f^{n}}{\Delta t} = - F_{\rm{adv}}(f^{gs},f^{n+1}) + \Bigl( \frac{\delta f}{\delta t} \Bigr)_{{\rm col}}^{{\rm lb}}(f^{n+1}), \label{eq:conBoltzrewrite_fullimp} \end{eqnarray} where \begin{eqnarray} &&F_{\rm{adv}}(f^{gs},f^{n+1}) = F^{SR}_{\rm{adv}}(f^{\rm{gs}}) + \kappa \Bigl( F^{\rm{NR}}_{\rm{adv}}(f^{n+1}) - F^{\rm{NR}}_{\rm{adv}}(f^{\rm{gs}}) \Bigr).
\label{eq:stabilizationB} \end{eqnarray} The first term on the right-hand side of Eq.~\eqref{eq:stabilizationB} is the advection term for the special relativistic case. It is evaluated explicitly at the value of the current iterate, $f^{gs}$. The remaining two terms correspond to what the advection term would be in the non-relativistic case, evaluated implicitly and explicitly (at the current iterate), respectively. Together they represent a ``correction'' to the first term and are introduced for numerical stability. When $f^{gs}\rightarrow f^{n+1}$, these two terms cancel, and the right-hand side of Eq.~\eqref{eq:stabilizationB} becomes $F^{SR}_{\rm adv}(f^{n+1})$, as desired. The parameter, $\kappa$, is a limiter and prevents the correction from becoming larger than the first term, which Nagakura et~al.\ note can happen when the fluid velocities become several tens of percent of the speed of light. Given the solution of the distribution function and, in particular, the numerical determination of the collision term, the updates to the matter electron fraction and stress--energy tensor (including both energy and momentum exchange) are computed as follows [see Eqs.~\eqref{eq:fluidFourMomentumConservation}, \eqref{eq:ElectronNumberConservation}, \eqref{eq:electronfractionequationsourceterm}, and \eqref{eq:fourmomentumequationsourceterm}]: \begin{eqnarray} T^{\mu\nu}_{\hspace{3.5mm} ,\nu} &=& - G^{\mu}, \label{eq:TandGfinal} \\ N_{e \hspace{0.5mm} ,\nu}^{\nu} &=& - \Gamma, \label{eq:NandGammafinal} \end{eqnarray} where \begin{eqnarray} G^{\mu} &\equiv& \sum_{\rm{i}} G_{\rm{i}}^{\mu}, \label{eq:Gsumdef} \\ G_{\rm{i}}^{\mu} &\equiv& \int p_{\rm{i}}^{\mu} \Bigl( \frac{\delta f}{\delta \tau} \Bigr)_{\rm{col}(\rm{i})} dV_p, \label{eq:Gdef} \\ \Gamma &\equiv& \Gamma_{\nu_{e}} - \Gamma_{\bar{\nu_{e}}}, \label{eq:Gammasumdef} \\ \Gamma_{i} &\equiv& \int \Bigl( \frac{\delta f}{\delta \tau} \Bigr)_{\rm{col}(\rm{i})} dV_p, \label{eq:Gammadef} \end{eqnarray} and where, for Nagakura et al., $N_{e}^{\nu}$ (our $J_{e}^{\nu}$) is the electron density current, $dV_p$ (our $\pi_m$) is the invariant momentum-space volume element, and $i$ indicates the neutrino species. \begin{figure}[htbp] \centering \includegraphics[width=0.9\textwidth]{Krtheta} \caption{In the top left panel, \citet{NaIwFu18} plot the $r-\theta$ component of the Eddington tensor, $k^{r\theta}$, at 190 ms after bounce in a core-collapse supernova simulation they performed with their Boltzmann neutrino transport solver, initiated from an $11.2\,M_\odot$ progenitor. In the corresponding top right panel, they plot the (absolute) difference between $k^{r\theta}$ computed with Boltzmann neutrino transport and $k^{r\theta}$ computed with two-moment neutrino transport with M1 closure. In both cases, $k^{r\theta}$ is evaluated at the mean neutrino energy at each point of the spatial grid shown here. Nagakura et~al.\ classify such absolute differences in the off-diagonal components of the Eddington tensor in their model as substantial, indicating that Boltzmann transport is needed to accurately compute the components of the neutrino Eddington tensor.
In their model, $k^{r\theta}$, not $k^{\theta\theta}$, was demonstrated to dictate the evolution of the lateral neutrino fluxes in the critical semitransparent regime.\label{fig:krtheta}} \end{figure} \subsection{Boltzmann kinetics: spatial discontinuous Galerkin discretization plus spectral multigroup $P_{N}$} A numerical treatment of Boltzmann kinetics that implements a finite-element discretization---specifically, a Discontinuous Galerkin (DG) discretization---for the spatial degrees of freedom together with a spectral decomposition in momentum space was developed by \citet{RaAbRe13} for the Boltzmann equation: \begin{equation}\label{eq:relativistic.boltzmann} p^\mu \frac{\partial F}{\partial x^{\mu}} = \mathbb{C}[F]\,. \end{equation} In this scheme, the distribution function, $F$, is first decomposed in momentum space as \begin{equation}\label{eq:solution.ansatz} F(x^\alpha, \nu, \varphi, \theta) = \sum_{n=0}^{N_\nu} \sum_{\ell = 0}^{N} \sum_{m = -\ell}^\ell F^{n\ell m}(x^\alpha)\, \chi_{n}(\nu)\, Y_{\ell m}(\varphi,\theta)\,, \end{equation} where the orthonormal basis functions in the energy dimension are defined by \begin{align}\label{eq:energy.basis} \chi_n(\nu) &= \begin{cases} {1}/{\sqrt{V_n}}, & \textrm{if } \nu \in [\nu_n, \nu_{n+1}]\,, \\ 0, & \textrm{otherwise}\,, \end{cases}\,,& V_n &= \int_{\nu_n}^{\nu_{n+1}} h^3 \nu^2\, \mathrm{d} \nu = \frac{h^3}{3} (\nu_{n+1}^3 - \nu_n^3)\,. \end{align} Using the orthonormality of the spherical harmonics and $\chi_{n}(\nu)$, the coefficients in the momentum-space expansion of the distribution function, Eq.~\eqref{eq:solution.ansatz}, are given by \begin{equation} F^{n \ell m}(x^\alpha) = \int_{0}^{\infty} h^3 \nu^2\, \mathrm{d} \nu \int_{\mathcal{S}_1} \mathrm{d} \Omega\, F(x^\alpha, \nu, \varphi, \theta)\, Y_{\ell m}(\varphi, \theta) \, \chi_{n}(\nu)\,. \end{equation} Radice et~al.\ introduce the shorthand notation: \begin{equation} \Psi_A(\nu, \varphi, \theta) \equiv \chi_{n}(\nu)\, Y_{\ell m}(\varphi,\theta)\,, \end{equation} and reexpress Eq.~\eqref{eq:solution.ansatz} as \begin{equation}\label{eq:solution.ansatz.short} F(x^\alpha, \nu, \varphi, \theta) = \sum_A F^A(x^\alpha) \Psi_A(\nu, \varphi, \theta) = F^A \Psi_A\,. \end{equation} Inserting the expansion (\ref{eq:solution.ansatz.short}) into the Boltzmann equation \eqref{eq:relativistic.boltzmann} leads to a coupled system of equations for the expansion coefficients that must be solved to determine them as a function of time and space: \begin{equation}\label{eq:scheme.derivation.step1} p^0 \frac{\partial F^B}{\partial t} \Psi_B + p^k \frac{\partial F^B}{\partial x^k} \Psi_B = \mathbb{C}[F]\,. \end{equation} Multiplying Eq.~\eqref{eq:scheme.derivation.step1} by $\Psi^A$ (in the notation of Radice et al., a superscript $A$ indicates a complex conjugate), integrating over momentum space, and using the orthonormality of the basis functions $\Psi_A$ gives \begin{equation}\label{eq:charon.scheme} \frac{\partial F^A}{\partial t} + {{\mathcal{P}}^{kA}}_{B} \frac{\partial F^B}{\partial x^k} = \mathbb{S}^A[F]\,, \end{equation} where \begin{equation}\label{eq:charon.stiff} {{\mathcal{P}}^{kA}}_{B} \equiv \int p^k\, \Psi^A\, \Psi_B\, \mathrm{d} \Pi\, \end{equation} and \begin{equation}\label{eq:charon.source} \mathbb{S}^A[F] \equiv \int \mathbb{C}[F]\, \Psi^A\, \mathrm{d} \Pi\,. \end{equation} In Eqs.~\eqref{eq:charon.stiff} and \eqref{eq:charon.source}, $\mathrm{d} \Pi$ is the invariant momentum-space volume element.
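The piecewise-constant energy basis of Eq.~\eqref{eq:energy.basis} is simple to realize in code. The following Python sketch (with $h=1$ and all names illustrative assumptions) constructs the group volumes $V_n$ and verifies orthonormality with respect to the measure $h^3\nu^2\,\mathrm{d}\nu$ by a simple Riemann sum.
\begin{verbatim}
import numpy as np

H = 1.0  # Planck constant in code units (assumption)

def energy_basis(nu_edges):
    # Group volumes V_n = (h^3/3)(nu_{n+1}^3 - nu_n^3), Eq. (energy.basis).
    V = (H**3 / 3.0) * (nu_edges[1:]**3 - nu_edges[:-1]**3)

    def chi(n, nu):
        # chi_n(nu) = 1/sqrt(V_n) on [nu_n, nu_{n+1}], zero elsewhere.
        inside = (nu_edges[n] <= nu) & (nu < nu_edges[n + 1])
        return np.where(inside, 1.0 / np.sqrt(V[n]), 0.0)

    return V, chi

# Orthonormality check: int h^3 nu^2 chi_n chi_m d nu = delta_nm.
nu_edges = np.linspace(0.0, 10.0, 6)
V, chi = energy_basis(nu_edges)
nu = np.linspace(0.0, 10.0, 400001)
dnu = nu[1] - nu[0]
w = H**3 * nu**2
print(np.sum(w * chi(0, nu)**2) * dnu)            # ~ 1
print(np.sum(w * chi(0, nu) * chi(1, nu)) * dnu)  # 0 (disjoint support)
\end{verbatim}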
Once the expansion coefficients are obtained by solving Eq.~\eqref{eq:charon.scheme}, the solution to the original Boltzmann equation is given by Eq.~\eqref{eq:solution.ansatz}. Radice et~al.\ illustrate their approach to solving Eq.~\eqref{eq:charon.scheme} by considering the one-dimensional, collisionless case: \begin{equation} \frac{\partial F^A}{\partial t} + {{\mathcal{P}}^{1A}}_{B} \frac{\partial F^B}{\partial x} = 0\,. \label{eq:charon.scheme1D} \end{equation} In a DG discretization in $x$, the distribution function is written as an expansion in Lagrange polynomials: \begin{equation} F^{A}(x,t)=\sum_{i}{F^A}_{i}(t)\,u_i(x), \label{eq:1DDGexpansion} \end{equation} where \begin{equation} u_i(x) = u_{i-1/2} l_{i-1/2}(x) + u_{i+3/2} l_{i+3/2}(x)\,, \end{equation} and where the Lagrange polynomials are defined by \begin{align} l_{i-1/2}(x) &= 1 - \frac{x - x_{i-1/2}}{x_{i+3/2} - x_{i-1/2}}\,,& l_{i+3/2}(x) &= \frac{x - x_{i-1/2}}{x_{i+3/2} - x_{i-1/2}}\,. \end{align} Insertion of the expansion \eqref{eq:1DDGexpansion} in Eq.~\eqref{eq:charon.scheme1D} yields the following set of coupled ordinary differential equations for the coefficients ${F^A}_{i}$: \begin{equation} \Delta x \frac{\mathrm{d} {F^A}_i}{\mathrm{d} t} = {{\mathbb{F}}^A}_i\,, \label{eq:ODE} \end{equation} where the flux factors are given by \begin{align} \label{eq:dg_fluxes} {{\mathbb{F}}^A}_i &\equiv \frac{3}{2} \mathcal{F}^- - \overline{\mathcal{F}} - \frac{1}{2} \mathcal{F}^+ \,,& {{\mathbb{F}}^A}_{i+1} &\equiv \frac{1}{2}\mathcal{F}^- + \overline{\mathcal{F}} - \frac{3}{2} \mathcal{F}^+\,, \end{align} \[ \overline{\mathcal{F}} \equiv \frac{1}{2} \Big[ \big({{\mathcal{P}}^{1A}}_{B}\big)_i {F^B}_i + \big({{\mathcal{P}}^{1A}}_{B}\big)_{i+1} {F^B}_{i+1}\Big]\,, \] \[ \mathcal{F}^- \equiv \frac{1}{2} \bigg[ {{\mathcal{P}}^{1A}}_{B} \big({F^B}_L + {F^B}_R\big) - {{\mathcal{R}}^{1A}}_{C} \mathrm{max}(v, |{\Lambda^{1C}}_{D}|) {{\mathcal{L}}^{1D}}_{B} \big({F^B}_R - {F^B}_L\big) \bigg]\,, \] \[ {{\mathcal{P}}^{1A}}_{B} = {{\mathcal{R}}^{1A}}_{C} {\Lambda^{1C}}_{D} {{\mathcal{L}}^{1D}}_{B}\,. \] In Eq.~\eqref{eq:dg_fluxes}, $\overline{\mathcal{F}}$ is the average flux; $\mathcal{F}^-$ is the flux computed at the boundary $x_{i+1/2}$ of the $i^{\rm th}$ element through an exact solution of the Riemann problem with left and right states, ${F^B}_{L}$ and ${F^B}_{R}$, respectively; $\mathcal{F}^+$ is defined in the same way, at the boundary $x_{i+3/2}$; ${{\mathcal{R}}^{1A}}_{C}$ is the matrix of right eigenvectors of ${{\mathcal{P}}^{1A}}_{B}$; ${{\mathcal{L}}^{1D}}_{B}$ is the matrix of left eigenvectors of ${{\mathcal{P}}^{1A}}_{B}$; ${\Lambda^{1C}}_{D}$ is the matrix of eigenvalues of ${{\mathcal{P}}^{1A}}_{B}$; and $v$ is a parameter taken to be the first abscissa of the adopted Legendre quadrature (this parameter is introduced by Radice et~al.\ to numerically dissipate zero-speed modes). The three-dimensional extension of the scheme is given by constructing the flux factors in each of the three dimensions in the same way, which gives \begin{equation}\label{eq:ldg.full} \frac{\mathrm{d} {F^A}_{i,j,k}}{\mathrm{d} t} = \mathbb{S}^A[F] + \frac{1}{\Delta x} {{\mathbb{F}}^A}_{i,j,k} + \frac{1}{\Delta y} {{\mathbb{G}}^A}_{i,j,k} + \frac{1}{\Delta z} {{\mathbb{H}}^A}_{i,j,k} \,.
\end{equation} Now, focusing on the temporal discretization of Eq.~\eqref{eq:ldg.full} and using Radice et al.'s rewrite of the equation as \begin{equation}\label{eq:ldg.short} \frac{\mathrm{d} F^A}{\mathrm{d} t} = \mathbb{S}^A[F] + \mathcal{A}^A[F]\,, \end{equation} the authors evolve the coefficients of the distribution function's DG--spectral expansion, Eqs.~\eqref{eq:solution.ansatz} and \eqref{eq:1DDGexpansion}, in a two-step, semi-implicit, asymptotic-preserving scheme \citep{McEvLo08}, staged as a predictor step, \begin{equation} \label{eq:predictorstep} \frac{{F^A}_{k+1/2} - {F^A}_{k}}{\Delta t/2} = {\mathcal{A}}^{A}[F_k] + \mathbb{S}^A[F_{k+1/2}]\,\,, \end{equation} followed by a corrector step, \begin{equation} \label{eq:correctorstep} \frac{{F^A}_{k+1} - {F^A}_{k}}{\Delta t} = {\mathcal{A}}^{A}[F_{k+1/2}] + \mathbb{S}^A[F_{k+1}]\,. \end{equation} Because Radice et~al.\ use a partially spectral scheme, they, like all others deploying such schemes, had to contend with the Gibbs phenomenon. To do so, they were informed by the seminal work of \citet{McHa10}, who developed a method, using filtering, to mitigate Gibbs phenomena in $P_{N}$ schemes. Unfortunately, as pointed out by McClarren and Hauck and by Radice et al., the filtered $P_{N}$ scheme does not have a unique continuum limit---i.e., it cannot be shown to be a discretization of a system of partial differential equations. In Radice et al.'s approach, the spherical harmonic expansion of the solution is filtered at each time step using a spherical-spline filter: \begin{equation} \label{eq:filter_expansion} \big[\mathscr{F}(F)\big](\varphi, \theta) = \sum_{\ell=0}^N \sum_{m = -\ell}^\ell \bigg[\sigma\Big(\frac{\ell}{N+1}\Big)\bigg]^s F^{\ell m} Y_{\ell m}(\varphi, \theta)\,, \end{equation} where $\sigma(\eta)$ is a filter function of order $p$ such that \begin{align} \sigma(0) &= 1\,, & \sigma^{(k)}(0) = 0\,, \ \textrm{for } k = 1,2, \ldots p-1\,, \end{align} and where $s$ is a strength parameter, which is chosen to be a function of the time step: \begin{equation} s=\beta\Delta t, \end{equation} where $\beta$ is a parameter. Radice et~al.\ document success using a modified, second-order Lanczos filter: \begin{equation} \sigma(\eta)=\frac{\sin\eta}{\eta}. \label{eq:Lanczosfilter} \end{equation} With the introduction of filtering, the time stepping algorithm, Eqs.~\eqref{eq:predictorstep} and \eqref{eq:correctorstep}, is modified as follows: \begin{align} \label{eq:filter.a} \frac{{F^A}_{*} - {F^A}_k}{\Delta t/2} & = {\mathcal{A}^A}[F_k] + {\mathbb{S}^A}[F_{k+1/2}], \\ \label{eq:filter.1} {F^A}_{k+1/2} & = {\mathscr{F}^A}_{B} {F^B}_{*}, \\ \label{eq:filter.b} \frac{{F^A}_{**} - {F^A}_{k}}{\Delta t} &= {\mathcal{A}^A}[F_{k+1/2}] + {\mathbb{S}^A}[F_{k+1}], \\ \label{eq:filter.2} {F^A}_{k+1} &= {\mathscr{F}^A}_{B} {F^B}_{**}, \end{align} where ${\mathscr{F}^A}_{B}$ is a diagonal matrix that instantiates the filtering operation. Moreover, Radice et~al.\ were able to show that their filtering method represents the first-order, operator-split discretization of a term added to the underlying system of partial differential equations, Eq.~\eqref{eq:charon.scheme}: \begin{equation}\label{eq:filtered.pn} \frac{\partial F^A}{\partial t} + {{\mathcal{P}}^{kA}}_{B} \frac{\partial F^B}{\partial x^k} = e^A + {S^A}_{B} F^B + \beta {L^A}_{B} F^B\,, \end{equation} where ${L^A}_{B}$ is a diagonal matrix with coefficients $\log\sigma(\ell/(N+1))$.
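A minimal Python sketch of the filtering operation of Eq.~\eqref{eq:filter_expansion}, using the modified Lanczos filter of Eq.~\eqref{eq:Lanczosfilter} and strength $s=\beta\Delta t$; the nested-list layout of the coefficients is an illustrative assumption.
\begin{verbatim}
import numpy as np

def apply_filter(F, N, beta, dt):
    # Damp each coefficient F^{lm} by [sigma(l/(N+1))]^s with
    # sigma(eta) = sin(eta)/eta and s = beta * dt.  F is assumed to be
    # a nested list with F[l][m+l] holding F^{lm}.
    s = beta * dt
    filtered = []
    for ell in range(N + 1):
        eta = ell / (N + 1)
        sigma = 1.0 if eta == 0.0 else np.sin(eta) / eta  # sigma(0) = 1
        damp = sigma**s
        filtered.append([damp * F[ell][m] for m in range(2 * ell + 1)])
    return filtered
\end{verbatim}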
Their filtering method is thus equivalent to the addition of a forward-scattering term [$\sigma(0)=1$] to Eq.~\eqref{eq:charon.scheme}, and their overall method is a unique discretization of an underlying system of coupled partial differential equations, Eq.~\eqref{eq:filtered.pn}. While the filtering effectively mitigates the Gibbs phenomenon, the distribution function can still become negative, which is unphysical. To contend with negative distribution functions in the context of the filtered $P_{N}$ scheme, \citet{LaHa19} developed and analyzed so-called positivity limiters, which can be used to ensure positivity of the distribution function in each step of a time integration scheme. \subsection{Boltzmann kinetics: spectral decomposition across phase space} \citet{PePeNo14} opt for a fully spectral approach to the solution of the 3+1 general relativistic Boltzmann equation in the conformally flat condition (CFC) approximation in non-conservative form: \begin{equation} \label{eq:boltzefcfc} \frac{1}{\alpha}\frac{\partial f}{\partial t} + \left( \frac{p^i}{\Psi^2 \epsilon} - \frac{\beta^i}{\alpha} \right) \frac{\partial f}{\partial x^i} - \bar{\Gamma}^j\,\!_{\mu \nu} p^\mu p^\nu J^i\,\!_j \frac{1}{\epsilon}\frac{\partial f}{\partial p^i} = \frac{1}{\epsilon}\mathcal{C}[f]. \end{equation} In this case, the 3+1 line element is \begin{equation} \label{eq:lineelem} ds^2 = -\alpha^2 dt^2 + \gamma_{\tilde{i}\tilde{j}}(dx^{\tilde{i}} + \beta^{\tilde{i}} dt)(dx^{\tilde{j}} + \beta^{\tilde{j}} dt), \end{equation} where the spatial geometry is assumed to be conformally flat---i.e., \begin{equation} \label{eq:defcfc} \gamma_{\tilde{i}\tilde{j}} = \Psi^4 f_{\tilde{i}\tilde{j}}. \end{equation} In Eq.~\eqref{eq:defcfc}, $f_{\tilde{i}\tilde{j}}$ is the flat metric and $\Psi$ is the conformal factor, \begin{equation} \label{e:def_Psi} \Psi = \left( \frac{\det \gamma_{\tilde{i}\tilde{j}}}{\det f_{\tilde{i}\tilde{j}}} \right)^{1/12}. \end{equation} In Eq.~\eqref{eq:boltzefcfc}, $p^{\mu}$ and $\epsilon$ correspond to the neutrino four-momenta and energy, respectively, measured by an Eulerian observer. $\bar{\Gamma}^j\,\!_{\mu \nu}$ are the Ricci rotation coefficients. Peres et al.'s choice of phase-space coordinates is motivated by the known challenge that time derivatives present for spectral methods. Specifically, were comoving-frame four-momenta chosen instead, the coefficients of the advection terms on the left-hand side of Eq.~\eqref{eq:boltzefcfc} would contain time derivatives associated with, for example, relativistic Doppler shift. Of course, the collision term is best evaluated in the comoving frame, using comoving-frame four-momenta, so the choice of Eulerian frame four-momenta necessitates additional work to treat collisions. Peres et~al.\ leave the detailed treatment of this term to a future publication. They also acknowledge the benefits of beginning instead with the conservative form of Eq.~\eqref{eq:boltzefcfc} and leave that to a future publication, as well. In their approach, the distribution function is written as an expansion in terms of the basis functions across all six dimensions of phase space---in this case, spherical coordinates in both space and momentum space: \begin{equation} \label{eq:6_variables} f \left(t, r, \theta, \phi, \epsilon, \Theta, \Phi \right) \simeq \sum_{i=0}^{n_r} \sum_{j=0}^{n_\theta} \sum_{k=0}^{n_\phi} \sum_{l=0}^{n_\epsilon} \sum_{m=0}^{n_\Theta} \sum_{p=0}^{n_\Phi} C_{ijklmp}(t)\, T_i (\bar{r})\, F_j(\theta)\, F_k(\phi)\, T_l( \bar{\epsilon} )\, T_m( \bar{\Theta} )\, F_p(\Phi).
\end{equation} Chebyshev basis functions are used for $r$, $\epsilon$, and $\Theta$---reflecting the expected non-periodic nature of the distribution function in these dimensions. Fourier basis functions are used for $\theta$, $\phi$, and $\Phi$---reflecting the expected periodic nature of the distribution function in these dimensions. Barred variables in Eq.~\eqref{eq:6_variables} are in the range $[-1,1]$ and are related to the standard coordinates by affine transformations. In the case of the radial coordinate, the affine transformation is written explicitly as \begin{equation} \label{eq:map_af} r = \alpha_r \bar{r} + \beta_r, \qquad \bar{r} \in [-1,1], \end{equation} where $\alpha_r$ and $\beta_r$ are constants, with $R_{\rm min} = \beta_r - \alpha_r$ and $R_{\rm max} = \alpha_r + \beta_r$. $R_{\rm min}$ and $R_{\rm max}$ are the minimum and maximum radii of the spherical shell considered in the Peres et~al.\ analysis, respectively. (The extension of their method to $r=0$ is left for future development.) If the collision term in Eq.~\eqref{eq:boltzefcfc} is ignored, the equation can be written in terms of the Liouville operator, $\tilde{L}[f]$, as \begin{equation} \label{eq:time_pde} \frac{\partial f}{\partial t} = - \tilde{L}[f]. \end{equation} Substituting the expansion (\ref{eq:6_variables}) into Eq.~\eqref{eq:time_pde} results in a system of coupled ordinary differential equations for the solution vector, $U_N(t)$, where $N=n_r\times n_\theta \times n_\phi \times n_\epsilon \times n_\Theta \times n_\Phi$. The elements of the solution vector are the coefficients $C_{ijklmp}(t)$. Under this substitution, the operator, $\tilde{L}[f]$, in Eq.~\eqref{eq:time_pde} becomes an $N\times N$ matrix. To solve this system of equations, Peres et~al.\ employ an explicit, third-order, Adams--Bashforth scheme, \begin{equation} \label{e:ODE_integration} U_N^{n+1} = U_N^n - \Delta t \left( \frac{23}{12} \tilde{L}_N U_N^n - \frac{4}{3} \tilde{L}_N U_N^{n-1} + \frac{5}{12} \tilde{L}_N U_N^{n-2} \right), \end{equation} though they emphasize they are not restricted to explicit updates but could also deploy semi-implicit and implicit methods. \subsection{Boltzmann kinetics: Monte Carlo methods} \label{sec:MC} Up to now, we have focused on deterministic methods for the solution of the Boltzmann neutrino transport equations in core-collapse supernovae. But nondeterministic---specifically Monte Carlo---methods have also been used. Until recently, they have been confined to ``snapshot'' studies in a particular slice of an evolving stellar core and have been used most extensively as a gauge of the accuracy of deterministic, but approximate, methods. Although it has yet to be used in the context of a core-collapse supernova simulation as the method of choice for treating time-dependent neutrino transport, a lepton-number- and energy-conserving Monte Carlo scheme for such transport has been developed by \citet{AbBuOt12} for the $O(v/c)$ limit of special relativistic effects and Newtonian gravity. In their paper, Abdikamalov et~al.\ illustrate their method assuming spherical symmetry.
They begin with the equation for the neutrino intensity for each neutrino species, here written generically without a species label: \begin{eqnarray} \label{eq:te} \frac{1}{c}\frac{\partial I (r,\mu,\varepsilon,t)}{\partial t} + \mu \frac{\partial I(r,\mu,\varepsilon,t)}{\partial r} + \frac{1-\mu^2}{r} \frac{\partial I (r,\mu,\varepsilon,t)}{\partial \mu} \nonumber\\\nonumber\\ = \kappa_a(\varepsilon,T) \left[ B(\varepsilon,T) - I (r,\mu,\varepsilon,t)\right] - \kappa_s(\varepsilon,T) I (r,\mu,\varepsilon,t) \nonumber\\\nonumber\\ + 2 \pi \int_{-1}^{+1} \int_0^\infty \varkappa_s(\varepsilon',\mu' \to \varepsilon,\mu) I(r,\mu',\varepsilon',t) d\mu' d\varepsilon' \, . \end{eqnarray} The first term on the right-hand side is the familiar term for emission and absorption of neutrinos; $\kappa_a$ ($\kappa_s$) is the total absorption (scattering) opacity. The last term describes the additional source of neutrinos resulting from inscattering into the neutrino ``beam'' with direction $\mu$ and energy $\varepsilon$. Eq.~\eqref{eq:te} is solved subject to the boundary condition \begin{equation} \label{eq:tebcR} I (R,\mu,\varepsilon,t) = I_R (\mu,\varepsilon,t)\, , \quad -1\le\mu\le 0 \, . \end{equation} In their set of evolution equations, Eq.~\eqref{eq:te} is coupled to the material energy equation and the equation for the evolution of the electron fraction: \begin{eqnarray} \label{eq:u0} \rho \frac{d U_m} {d t} = 2 \pi \sum_i \int_{-1}^1 \int_0^\infty \kappa_{ai} (I_i - B_i) \, d\mu d\varepsilon \nonumber\\\nonumber\\ + \sum_i S_i \, , \\\nonumber\\ \label{eq:Ye0} \rho N_A \frac{d Y_e} {d t} = 2\pi \sum_i s_i \int_{-1}^1 \int_0^\infty \frac{\kappa_{ai}}{\varepsilon} (I_i - B_i) \, d\mu d\varepsilon \, . \end{eqnarray} The sum over $i$ runs over the neutrino species; the species index will be dropped in what follows, except in $s_i$, which will be carried through the remaining presentation of the method. $s_i=+1, -1, 0$ for electron neutrinos, electron antineutrinos, and heavy-flavor neutrinos, respectively. In Eq.~\eqref{eq:u0}, $S$ is the contribution to the material energy from energy-exchanging scattering with neutrinos and is given by \begin{eqnarray} \label{eq:s} S = (2\pi)^2 \int_0^\infty \!\!\!\! \int_0^\infty \!\!\! \int_{-1}^1 \int_{-1}^1 \!\! \bigg[\frac{\varepsilon}{\varepsilon'} \varkappa_s(\varepsilon', \mu' \!\!\to \varepsilon,\mu) I (r,\mu',\varepsilon',t) \nonumber\\\nonumber\\ - \varkappa_s(\varepsilon, \mu \to \varepsilon', \mu') I (r,\mu,\varepsilon,t) \bigg] d\varepsilon d\varepsilon' d\mu d\mu' \, . \end{eqnarray} Abdikamalov et~al.\ introduce the additional quantities: \begin{eqnarray} \label{eq:u_r} {U_r} & = & \frac{4\pi}{c} \int_0^\infty B d \varepsilon \, , \\ \label{eq:b} b & = & \frac{B}{4\pi \int_0^\infty B d\varepsilon} \, , \\ \label{eq:sigma_p} \kappa_p & = & \frac{\int_0^\infty \kappa_a B d\varepsilon}{\int_0^\infty B d\varepsilon} \, , \\ \label{eq:xi_a} \chi_a & = & \frac{\kappa_{a}}{\varepsilon}\, , \\ \label{eq:xi_p} \chi_p & = & \frac{\int_0^\infty\chi_a B d\varepsilon}{\int_0^\infty B d\varepsilon} \, , \end{eqnarray} where $U_r$ is the equilibrium neutrino energy density and $\kappa_p$ is the Planck-mean opacity.
The evolution equation for $U_r$ is related to the evolution equations for $U_m$ and $Y_e$ by \begin{equation} \label{eq:dBidt2} \frac{d {U_r}}{d t} = \beta \left(\rho \frac{d U_m}{dt} \right) + \zeta \left( \rho N_A \frac{dY_e}{dt} \right)\, , \end{equation} where \begin{equation} \label{eq:alphabeta} \beta = \frac{1}{\rho C_V} \left(\frac{\partial {U_r}}{\partial T}\right)_{\rho,Y_e} \, \end{equation} and \begin{equation} \label{eq:zeta} \zeta = \frac{1}{\rho N_A} \left[\left(\frac{\partial {U_r}}{\partial Y_e} \right)_{\rho,T} -\frac{1}{C_V} \left(\frac{\partial U_m}{\partial Y_e} \right)_{\rho,T} \left(\frac{\partial {U_r}}{\partial T} \right)_{\rho,Y_e} \right] \, . \end{equation} In Eqs.~\eqref{eq:dBidt2} through \eqref{eq:zeta}, $N_A$ is Avogadro's number and $C_V$ is the material heat capacity. As with deterministic methods, the first step in the solution of Eqs.~\eqref{eq:te}, \eqref{eq:u0}, and \eqref{eq:Ye0} is to linearize them. As part of this linearization procedure, Abdikamalov et~al.\ also ensure that these three evolution equations become decoupled. The linearization begins by approximating $\{\kappa_a, \kappa_p,\kappa_s, \varkappa_s, b, \chi_a, \chi_p, \beta, \zeta\}$ with $\{\tilde\kappa_a,\tilde\kappa_p, \tilde\kappa_s, \tilde\varkappa_s, \tilde b, \tilde\chi_a, \tilde\chi_p, \tilde\beta, \tilde\zeta\}$. Abdikamalov et~al.\ define the latter as the time-centered values of the former within the time interval $t_n\le t\le t_{n+1}$. In practice, they are chosen at the initial time step, $t_n$. Given this linearization, Eqs.~\eqref{eq:te}, \eqref{eq:u0}, \eqref{eq:Ye0}, and \eqref{eq:dBidt2} become: \begin{eqnarray} \label{eq:rt2} \frac{1}{c} \frac{\partial I(\mu,\varepsilon)} {\partial t} + \mu \frac{\partial I(\mu,\varepsilon)}{\partial r} + \frac{1-\mu^2}{r} \frac{\partial I(\mu,\varepsilon)}{\partial \mu} \nonumber\\\nonumber\\ = c \tilde \kappa_{a} \tilde b {U_r} - (\tilde \kappa_a+\tilde \kappa_s) I(\mu,\varepsilon) \nonumber\\\nonumber\\ + 2 \pi \int_{-1}^{+1} \int_0^\infty \tilde \varkappa_s(\varepsilon',\mu' \to \varepsilon,\mu) I(\mu',\varepsilon') d\mu' d\varepsilon' \, , \end{eqnarray} \begin{eqnarray} \label{eq:u2} && \rho \frac{d U_m} {d t} = 2\pi \int_{-1}^1 \int_0^\infty \tilde \kappa_{a} I d\mu d\varepsilon - c \tilde \kappa_{p} {U_r} + S\, , \\\nonumber\\ \label{eq:Ye2} && \rho N_A \frac{d Y_e} {d t} = 2\pi s_i \int_{-1}^1 \int_0^\infty \tilde{\chi}_{a} I \, d\mu d \varepsilon - c s_i \tilde{\chi}_{p} {U_r} \, , \end{eqnarray} \begin{equation} \label{eq:dBidt5} \frac{d {U_r}}{d t} = 2\pi \int_{-1}^1 \int_0^\infty \tilde \gamma I \, d\mu d \varepsilon - c \tilde \gamma_{p} {U_r} + \tilde\beta S \, , \end{equation} where \begin{eqnarray} \tilde \gamma &=& \tilde \beta \tilde \kappa_{a} + \tilde \zeta s_i \tilde{\chi}_{a} \, , \\\nonumber\\ \tilde \gamma_{p} &=& \tilde \beta \tilde \kappa_{p} + \tilde \zeta s_i \tilde{\chi}_{p} \, . \end{eqnarray} It is understood in Eqs.~\eqref{eq:u2} and \eqref{eq:dBidt5} that $\varkappa_s$ is replaced by $\tilde\varkappa_s$.
Abdikamalov et~al.\ then time average Eq.~\eqref{eq:dBidt5} and use \begin{equation} \label{eq:bbar} \bar{U}_r = \alpha U_{r,n+1} + (1 - \alpha) U_{r,n}^* \, , \end{equation} where, as pointed out by Abdikamalov et al., $\alpha$ controls the degree to which the method is implicit, and where \begin{equation} U_{r,n}^* = U_{r,n}+\tilde\beta\Delta t_n\bar S \end{equation} and \begin{equation} \label{eq:s_bar} \bar S = \frac{1}{\Delta t_n} \int_{t_n}^{t_{n+1}} S(t) dt , \end{equation} to obtain \begin{equation} \label{eq:dBidt10} \bar{U}_r = f_{n} U_{r,n}^* + 2\pi\frac{1-f_{n}}{c \tilde \gamma_{p}} \int_{-1}^1 \int_0^\infty \tilde \gamma \bar{I} \, d\mu d\varepsilon \, , \end{equation} where \begin{equation} \label{eq:ff_nu} f_{n} = \frac{1}{1 + \alpha c \Delta t_n \tilde \gamma_{p}} . \end{equation} Abdikamalov et~al.\ now assume that $\bar{U}_r=U_r(t)$ and $\bar{I}=I(t)$ in Eq.~\eqref{eq:dBidt10} and use the resultant equation to substitute for $U_r$ in Eq.~\eqref{eq:rt2}, to obtain their final equation for the evolution of the neutrino intensity: \begin{eqnarray} \label{eq:rt5} \frac{1}{c} \frac{\partial I} {\partial t} + \mu \frac{\partial I}{\partial r} + \frac{1-\mu^2}{r} \frac{\partial I}{\partial \mu} = \tilde \kappa_{ea} c \tilde b U_{r,n}^* \nonumber\\\nonumber\\ - \big( \tilde \kappa_{ea} + \tilde \kappa_{es,e} + \tilde \kappa_{es,l} + \tilde \kappa_s \big) I \nonumber\\\nonumber\\+ 2\pi \frac{\tilde \kappa_a \tilde b}{\tilde \kappa_p} \int_{-1}^1 \int_0^\infty \tilde \kappa_{es,e} I \, d\mu d \varepsilon + 2\pi \frac{\tilde \kappa_a \tilde b}{\tilde \chi_p} \int_{-1}^1 \int_0^\infty \tilde \chi_{es,l} I \, d\mu d \varepsilon \nonumber\\\nonumber\\ + 2 \pi \int_{-1}^{+1} \int_0^\infty \tilde \varkappa_s(\varepsilon',\mu' \to \varepsilon,\mu) I(\mu',\varepsilon') d\mu' d\varepsilon' \, , \end{eqnarray} where \begin{eqnarray} \label{eq:kappa_ea} \kappa_{ea} &=& f_n \kappa_a \, , \\ \label{eq:kapppa_ese} \kappa_{es,e} &=& (1-f_n) \frac{\tilde \beta \tilde \kappa_p}{\tilde \gamma_p} \kappa_a \, , \\ \label{eq:kapppa_esl} \kappa_{es,l} &=& (1-f_n) \frac{\tilde \zeta s_i \tilde \chi_p}{\tilde \gamma_p} \kappa_a \, , \\ \label{eq:chi_ese} \chi_{es,e} &=& (1-f_n) \frac{\tilde \beta \tilde \kappa_p}{\tilde \gamma_p} \chi_a \, , \\ \label{eq:chi_esl} \chi_{es,l} &=& (1-f_n) \frac{\tilde \zeta s_i \tilde \chi_p}{\tilde \gamma_p} \chi_a \, . \end{eqnarray} Note that Eqs.~\eqref{eq:kappa_ea}--\eqref{eq:kapppa_esl}, together with the definition of $\tilde\gamma_p$, imply $\kappa_{ea}+\kappa_{es,e}+\kappa_{es,l}=\kappa_a$, so the loss terms in Eq.~\eqref{eq:rt5} sum to $(\tilde\kappa_a+\tilde\kappa_s)I$, consistent with Eq.~\eqref{eq:rt2}. A similar procedure can be used to derive equations for the updates of $U_m$ and $Y_e$, as was performed for $U_r$.
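For the reader's convenience, the algebra behind Eqs.~\eqref{eq:dBidt10} and \eqref{eq:ff_nu} can be sketched as follows, using only the definitions above. Time averaging Eq.~\eqref{eq:dBidt5} over $[t_n,t_{n+1}]$, with the right-hand side evaluated using $\bar{U}_r$ and $\bar{I}$, gives
\begin{equation*}
U_{r,n+1} = U_{r,n}^* + \Delta t_n \left( 2\pi \int_{-1}^1 \int_0^\infty \tilde \gamma \, \bar{I} \, d\mu \, d\varepsilon - c\, \tilde \gamma_p\, \bar{U}_r \right) .
\end{equation*}
Inserting this into Eq.~\eqref{eq:bbar} and solving for $\bar{U}_r$ yields
\begin{equation*}
\bar{U}_r \left( 1 + \alpha\, c\, \Delta t_n\, \tilde \gamma_p \right) = U_{r,n}^* + 2\pi\, \alpha\, \Delta t_n \int_{-1}^1 \int_0^\infty \tilde \gamma\, \bar{I} \, d\mu \, d\varepsilon \, ,
\end{equation*}
which is Eq.~\eqref{eq:dBidt10}, with $f_n$ as in Eq.~\eqref{eq:ff_nu}, once one notes that $f_n\, \alpha\, \Delta t_n = (1-f_n)/(c\, \tilde \gamma_p)$.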
Abdikamalov et~al.\ point out that care must be taken to use the same expression for $U_r$---specifically, Eq.~\eqref{eq:dBidt10} with $\bar{U}_r=U_r(t)$ and $\bar{I}=I(t)$---in the derivation of the equation for $U_m$ in order to guarantee conservation of energy, to arrive at \begin{eqnarray} \label{eq:u5} U_{m,n+1} = U_{m,n} + \frac{\Delta t_n}{\rho} \bigg\{ 2\pi\int_{-1}^1 \int_0^\infty \tilde \kappa_{ea} \bar I \, d \mu d \varepsilon - \nonumber\\\nonumber\\ c f_{n} \tilde \kappa_{p} U_{r,n} + 2\pi \int_{-1}^1 \int_0^\infty \tilde \kappa_{es,l} \bar I \, d \mu d \varepsilon \nonumber\\\nonumber\\ - 2\pi \frac{\tilde \kappa_p}{\tilde \chi_p} \int_{-1}^1 \int_0^\infty \tilde \chi_{es,l} \bar I \, d \mu d \varepsilon + \bar S \bigg\} \, \end{eqnarray} and \begin{eqnarray} \label{eq:Ye3} Y_{e, n+1} = Y_{e, n} + \frac{\Delta t_n}{\rho N_A} \bigg\{ 2\pi s_i \int_{-1}^1 \int_0^\infty \tilde \chi_{ea} \bar I \, d \mu d \varepsilon - \nonumber\\\nonumber\\ c s_i f_{n} \tilde \chi_{p} U_{r,n} + 2\pi s_i \int_{-1}^1 \int_0^\infty \tilde \chi_{es,e} \bar I \, d \mu d \varepsilon \nonumber\\\nonumber\\ - 2\pi s_i \frac{\tilde \chi_p}{\tilde \kappa_p} \int_{-1}^1 \int_0^\infty \tilde \kappa_{es,e} \bar I \, d \mu d \varepsilon \bigg\}\, . \end{eqnarray} Having linearized and decoupled the equations of motion, the evolution in Abdikamalov et al.'s Monte Carlo approach proceeds as follows: The weight associated with each Monte Carlo particle (MCP) is the number of physical particles it represents, which is assumed to be $N_0$. The number of particles emitted by the matter in the time interval $[t_n,t_{n+1}]$ is \begin{equation} {\cal N}_T = 8\pi^2\int_{t_n}^{t_{n+1}}\int_0^R\int_0^\infty \frac{\kappa_a(\varepsilon, T) B(\varepsilon, T)}{\varepsilon} r^2 dt dr d\varepsilon \, . \end{equation} Then, the number of MCPs emitted in this time interval is \begin{equation} N_T = \mathrm{RInt} \left( {\cal N}_T / N_0 \right) \, , \end{equation} where $\mathrm{RInt}(x)$ returns the largest integer less than $x$. The particle energy of each MCP is chosen according to the functional form of $\kappa_a B$. Since thermal emission is isotropic, the angle of propagation of each emitted MCP, $\mu$, is chosen uniformly on the interval $[-1,+1]$ using \begin{equation} \mu = 2\xi-1 \, , \label{eq:randomangledetermination} \end{equation} where $\xi$ is a random number that takes on values in the interval $[0,1]$. Similarly, the emission time is chosen uniformly on the interval $[t_n,t_{n+1}]$ using \begin{equation} t = t_n + (t_{n+1} - t_n) \xi \, . \end{equation} To choose the zone in which an MCP is emitted, Abdikamalov et~al.\ use the probability that the MCP is emitted in a particular zone, given by the total number of particles emitted in that zone divided by the total number of particles emitted across all zones. Once an MCP is emitted in a particular zone, its location (assuming spherical symmetry) within that zone is determined using \begin{equation} r=\left[r_{j-1/2}^3 + \left(r_{j+1/2}^3 - r_{j-1/2}^3 \right)\xi \right]^{1/3} \, , \end{equation} where $j$ is the zone index. The number of MCPs entering from the outer boundary of the domain, at radius $R$, during the interval $[t_n,t_{n+1}]$ is given by \begin{equation} N_B = \mathrm{RInt} \left[ - \frac{8\pi^2 R^2}{N_0} \int_{t_n}^{t_{n+1}}\int_0^\infty \int_{-1}^0 \frac{\mu I_R(\mu,\varepsilon,t)}{\varepsilon} dt d\varepsilon d\mu \right] \, .
\end{equation} The number of MCPs present at the beginning of the interval is \begin{equation} N_{IC} = \mathrm{RInt} \left[ \frac{8\pi^2}{cN_0} \int_0^R \int_{-1}^1 \int_0^\infty I_i (r, \mu, \varepsilon) r^2 dr d\mu d\varepsilon \right] \, , \end{equation} where the spatial zone, propagation angle, and energy of each MCP are chosen randomly using the functional form of $I$. During transport, an emitted MCP will either (1) travel within the zone without collision and remain in the zone, (2) encounter a collision within the zone, or (3) exit the zone. These three possibilities correspond to three different distances, given by \begin{equation} \label{eq:d_b} d_b=\left\{ \begin{array}{ll} \left|\left[r_{j+1/2}^2-r^2(1-\mu^2)\right]^{1/2} - r \mu\right|, & \mathrm{if} \ j=1, \ \mu>0, \ \mathrm{or} \ \sin\theta \ge \frac{r_{j-1/2}}{r} \, , \\[8pt] \left|\left[r_{j-1/2}^2-r^2(1-\mu^2)\right]^{1/2} + r \mu\right|, & \mathrm{if} \ \mu < 0 \ \mathrm{and} \ \sin\theta < \frac{r_{j-1/2}}{r} \, , \end{array} \right. \end{equation} \begin{equation} \label{eq:d_t} d_t = c (t_{n+1} - t) \, , \end{equation} and \begin{equation} \label{eq:d_c} d_c = - \frac{\ln \xi}{\kappa_a + \kappa_s} \, . \end{equation} In Eqs.~\eqref{eq:d_b}, \eqref{eq:d_t}, and \eqref{eq:d_c}, $d_b$, $d_t$, and $d_c$ are the distance to the boundary of the zone, the distance the particle can travel in the time interval if it does not encounter a collision, and the distance between collisions, respectively, and $\sin\theta=(1-\mu^2)^{1/2}$. Once these distances are known, the MCP is moved to the location corresponding to the smallest of the three distances, and to the associated time, according to \begin{eqnarray} \label{eq:xupdate} r &\to& \sqrt{r^2 - 2 r d \mu + d^2} \, , \\ \nonumber\\ \label{eq:tupdate} t &\to& t + d / c \, . \end{eqnarray} If $d=d_c$, the MCP is either absorbed or scattered. To determine which, Abdikamalov et~al.\ use the following probabilities corresponding to the absorption and scattering coefficients appearing in Eq.~\eqref{eq:rt5}, the equation governing the MCP transport: \begin{eqnarray} P_{ea} & = & \kappa_{ea}/(\kappa_a+\kappa_s) , \\ P_s & = & \kappa_s/(\kappa_a+\kappa_s) , \\ P_{es,e} & = & \kappa_{es,e}/(\kappa_a+\kappa_s) , \\ P_{es,l} & = & \kappa_{es,l}/(\kappa_a+\kappa_s) . \end{eqnarray} The sum of all of these probabilities is, of course, equal to 1. As a result, to determine which of the above interactions takes place, Abdikamalov et~al.\ sample a random number $\xi$ in the range $[0,1]$. Based on the value of $\xi$: (1) if $\xi < P_{ea}$, the MCP undergoes effective absorption, (2) if $P_{ea} < \xi < P_{ea} + P_s$, the MCP is scattered, (3) if $P_{ea} + P_s < \xi < P_{ea} + P_s + P_{es,e}$, the MCP undergoes effective scattering in which its total energy is conserved, and (4) if $\xi > P_{ea} + P_s + P_{es,e}$, the MCP undergoes effective scattering in which its total lepton number is conserved. Within the interval $[0,1]$, the subinterval corresponding to each of the above possibilities has length equal to the probability of that possibility, which ensures that the selection procedure yields a statistically correct result; the result does not depend on the order in which the possibilities are considered. If the MCP is absorbed, its energy and lepton number are deposited in the zone and it is removed from the population of MCPs. If the MCP undergoes real scattering, it is moved to the location where the scattering occurs.
For iso-energetic scattering, its angle is determined randomly using Eq.~\eqref{eq:randomangledetermination}. If its energy changes as well, its new energy is determined by randomly sampling the functional form of the scattering kernel in energy. If the MCP undergoes effective scattering, which is isotropic, the MCP's angle is again determined randomly using Eq.~\eqref{eq:randomangledetermination} and its energy is determined by randomly sampling the local emissivity spectrum, since effective scattering mimics absorption and reemission. If $d=d_b$ and the boundary is a zone boundary, the transport sampling process begins again, using the values of the opacities in the new zone. If the boundary is the outer boundary, the MCP is removed from the population of MCPs. Finally, if $d=d_t$, the MCP is stored for the next time step. The above procedure is conducted for all of the MCPs in the computational domain (i.e., in all zones) at the beginning of a time step. For the case of a non-static medium, the comoving and Eulerian frames are no longer coincident and an extension of the Monte Carlo procedure outlined above is necessary. Abdikamalov et~al.\ extend their approach as follows: The emissivities and opacities are naturally computed in the comoving frame. Once calculated, the number of MCPs emitted in this frame in each cell is determined. Assuming spherical symmetry for simplicity, the location, $r_0$, direction of propagation, $\mu_0$, and energy, $\varepsilon_0$, of each MCP emitted at $t_0$ are sampled based on the comoving-frame emissivities. Each of these quantities is then transformed to the Eulerian frame using the well-known transformations (reproduced here for the spherically symmetric case): \begin{equation} \label{eq:e_lorentz} \varepsilon_0 = \gamma \varepsilon \left(1 - \frac{V_r \mu}{c}\right) \, , \end{equation} \begin{equation} \label{eq:mu_lorentz} \mu_0 = \frac{\mu-V_r/c}{1-\mu V_r/c} \, , \end{equation} \begin{equation} \varphi_0 = \varphi \, , \end{equation} \begin{equation} \label{eq:kappa_lorentz} \kappa(\mu, \varepsilon) = \frac{\varepsilon_0}{\varepsilon} \kappa_0 (\varepsilon_0) \ , \end{equation} \begin{equation} r = \gamma_j \left[r_0 + V_{r,j} (t_0-t_n)\right] \, , \end{equation} \begin{equation} t = \gamma_j \left(t_0-t_n+ \frac{V_{r,j} r_0}{c^2}\right) \, . \end{equation} The index $j$ in the last two equations is the index of the comoving-frame cell in which the MCP is emitted. (Of course, $V_{r,j}$ is measured in the Eulerian frame.) Once these transformations are made, the MCP is transported in the Eulerian frame, as described in the static case. Note, however, that the distance to collision must be determined using the Eulerian-frame values of the opacities. Most of the steps proceed as in the static case, with the exception of scattering, which requires additional care. If the MCP scatters, Abdikamalov et~al.\ transform the angle of propagation and the energy of the MCP into the comoving frame, determine a new comoving-frame angle and energy due to the scattering event, then transform this new set of momentum-space variables back into the Eulerian frame before the transport of the MCP proceeds. The amount of energy and momentum exchanged between the MCP and the matter during the scattering, determined in the comoving frame, is recorded.
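The static-medium transport loop described above can be summarized in the following minimal sketch (in Python; the function and variable names are ours, for illustration only, and energy resampling as well as the tallying of energy and lepton-number exchange are omitted):
\begin{verbatim}
import math, random

C = 2.99792458e10   # speed of light [cm/s]

def distance_to_boundary(r, mu, r_in, r_out):
    # Distance along the flight direction to the zone's radial
    # boundaries (cf. the expression for d_b above): the ray hits the
    # inner shell only if it points inward and its impact parameter
    # is smaller than the inner radius.
    b2 = r * r * (1.0 - mu * mu)
    if mu < 0.0 and b2 < r_in * r_in:
        return -r * mu - math.sqrt(r_in * r_in - b2)
    return math.sqrt(r_out * r_out - b2) - r * mu

def transport_one_mcp(r, mu, t, t_np1, r_in, r_out,
                      kappa_a, kappa_s, P_ea):
    # Advance a single MCP until absorption, zone exit, or the end
    # of the time step, whichever comes first.
    while True:
        d_b = distance_to_boundary(r, mu, r_in, r_out)
        d_t = C * (t_np1 - t)
        d_c = -math.log(random.random()) / (kappa_a + kappa_s)
        d = min(d_b, d_t, d_c)
        r_new = math.sqrt(r * r + 2.0 * r * d * mu + d * d)
        mu = (r * mu + d) / r_new   # direction cosine at new radius
        r, t = r_new, t + d / C
        if d == d_t:
            return "census", r, mu, t     # store for next time step
        if d == d_b:
            return "boundary", r, mu, t   # continue in adjacent zone
        if random.random() < P_ea:        # collision: select event
            return "absorbed", r, mu, t   # deposit energy, lepton no.
        mu = 2.0 * random.random() - 1.0  # isotropic scattering
\end{verbatim}
In a full implementation, the collision branch would further distinguish real scattering from the two flavors of effective scattering using the cumulative probabilities $P_{ea}$, $P_s$, and $P_{es,e}$, and would resample the MCP energy accordingly.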
One further addition to the method presented by Abdikamalov et~al.\ that should be noted is the computational efficiency they gain by coupling their method to a Discrete Diffusion Monte Carlo (DDMC) method, first developed by \citet{Densmore2007} for photon transport and extended by Abdikamalov et~al.\ to neutrino transport. The latter method is used in diffusive regimes, where the original Monte Carlo method is plagued by the short distances between collisions: MCP paths between collisions become very short and the number of such paths that have to be simulated becomes prohibitively large. However, even with the coupling to DDMC, the Monte Carlo approach described here remains expensive; its use as the method of choice in core-collapse supernova simulations awaits future computing architectures that are more capable and well suited to such an approach. \subsection{Two-moment kinetics} \label{sec:numericalTwoMomentKinetics} Numerical methods for solving equations for two-moment kinetics in core-collapse supernovae have now been developed by multiple groups \citep{MuJaDi10,OCon15,JuObJa15,KuTaKo16,RoOtHa16,SkDoBu19}. There are as many variations in approach as there are groups. Here we focus on common features and highlight specific solutions. For example, some authors have adopted fully relativistic descriptions \citep[e.g.,][]{MuJaDi10,OCon15,KuTaKo16,RoOtHa16}, while others have resorted to approximations that seek to capture relativistic effects \citep[e.g.,][]{JuObJa15,SkDoBu19}. Current methods for solving the equations for neutrino-radiation hydrodynamics using the two-moment approach employ finite-volume or finite-difference type methods. To this end, the system of equations can be written in the compact form (cf.\ Eqs.~\eqref{eq:BaryonMassConservation3p1}-\eqref{eq:fluidMomentumEquation3p1}, and Eqs.~\eqref{eq:spectralEulerianEnergyEquation_3p1} and \eqref{eq:spectralEulerianMomentumEquation_3p1}) \begin{align} \pd{}{t}\mathbf{U} + \pd{\mathbf{F}^{i}(\mathbf{U})}{i} + \pd{}{\varepsilon}\big(\,\varepsilon\,\mathbf{F}^{\varepsilon}(\mathbf{U})\,\big) =\mathbf{S}(\mathbf{U}) + \mathbf{C}(\mathbf{U}) , \label{eq:fluidTwoMomentSystem} \end{align} where the vector of evolved quantities is given by \begin{equation} \mathbf{U} =\sqrt{\gamma}\,\big(\,D,\,S_{j},\,\tau,\,D\,Y_{e},\,\varepsilon^{2}\mathcal{E}_{1},\,\varepsilon^{2}\mathcal{F}_{1,j},\ldots,\,\varepsilon^{2}\mathcal{E}_{N_{\mbox{\tiny{\sc Sp}}}},\,\varepsilon^{2}\mathcal{F}_{N_{\mbox{\tiny{\sc Sp}}},j}\,\big)^{T}. \label{eq:fluidTwoMomentState} \end{equation} The spatial flux vectors $\mathbf{F}^{i}$, energy-space flux vector $\mathbf{F}^{\varepsilon}$ (zero for fluid variables), ``geometry'' sources $\mathbf{S}$, and the ``collision'' source due to neutrino--matter interactions $\mathbf{C}$ can be inferred from equations given in Sections~\ref{sec:hydrodynamics3p1}, \ref{sec:MomentKineticsAndClosure}, and \ref{sec:neutrinoInteractions}. Here, as an example, we consider the Eulerian two-moment model described in Sect.~\ref{sec:TwoMoment} with $N_{\mbox{\tiny{\sc Sp}}}$ neutrino species. Note that, for each neutrino species, each radiation moment carries $N_{\varepsilon}$ degrees of freedom representing the energy distribution of neutrinos, giving a total of $4\times N_{\varepsilon}\times N_{\mbox{\tiny{\sc Sp}}}$ radiation degrees of freedom (compared to $6$ fluid degrees of freedom) per point in spacetime.
In core-collapse supernova models, $N_{\varepsilon}=\mathcal{O}(20)$, while $N_{\mbox{\tiny{\sc Sp}}}=3$--$6$, resulting in $240$--$480$ degrees of freedom per spacetime point. Among the approaches to solve the system of equations given by Eq.~\eqref{eq:fluidTwoMomentSystem} numerically, high-resolution shock-capturing (HRSC) methods (e.g., finite-volume or finite-difference), initially developed for compressible hydrodynamics with shocks, have attracted much attention recently. (For simplicity of presentation, we proceed to discuss the case of one spatial dimension.) In the HRSC approach, spacetime is discretized into spacelike foliations with discrete time coordinates $\{\,t^{n}\,\}_{n=0}^{N_{t}}$, where the time step $\Delta t=t^{n+1}-t^{n}$ is the separation between foliations. On each foliation, spatial positions are assigned coordinates $\{\,x_{j-\f{1}{2}}\,\}_{j=1}^{N_{x}+1}$, separating $N_{x}$ ``cells'' with width $\Delta x_{j}=(x_{j+\f{1}{2}}-x_{j-\f{1}{2}})$. In addition, for radiation quantities, momentum (energy) space is discretized into $N_{\varepsilon}$ ``energy bins'' with edges $\{\,\varepsilon_{i-\f{1}{2}}\,\}_{i=1}^{N_{\varepsilon}+1}$ and bin widths $\Delta \varepsilon_{i}=(\varepsilon_{i+\f{1}{2}}-\varepsilon_{i-\f{1}{2}})$. Integration of Eq.~\eqref{eq:fluidTwoMomentSystem} over the phase-space cell $I_{ij}=I_{i}^{\varepsilon}\times I_{j}^{x}$, where $I_{i}^{\varepsilon}=(\varepsilon_{i-\f{1}{2}},\varepsilon_{i+\f{1}{2}})$ and $I_{j}^{x}=(x_{j-\f{1}{2}},x_{j+\f{1}{2}})$, gives the semi-discretized system \begin{equation} \deriv{\mathbf{U}_{ij}}{t} =-\f{1}{\Delta V_{ij}}\big(\,\mathbf{F}_{ij+\f{1}{2}}^{x}-\mathbf{F}_{ij-\f{1}{2}}^{x}\,\big) -\f{1}{\Delta V_{ij}}\big(\,\varepsilon_{i+\f{1}{2}}\mathbf{F}_{i+\f{1}{2}j}^{\varepsilon}-\varepsilon_{i-\f{1}{2}}\mathbf{F}_{i-\f{1}{2}j}^{\varepsilon}\,\big) +\mathbf{S}_{ij}+\mathbf{C}_{ij}, \label{eq:fluidTwoMomentSystemSemiDiscrete} \end{equation} where the evolved quantities are the cell averages defined as \begin{equation} \mathbf{U}_{ij}(t) = \f{1}{\Delta V_{ij}}\int_{I_{ij}}\mathbf{U}(\varepsilon,x,t)\,d\varepsilon\,dx \quad\text{and}\quad \Delta V_{ij} = \int_{I_{ij}}\sqrt{\gamma}\,\varepsilon^{2}d\varepsilon dx, \label{eq:cellAverage} \end{equation} with $\mathbf{S}_{ij}$ and $\mathbf{C}_{ij}$ defined analogously, and the fluxes defined as \begin{align} \mathbf{F}_{ij\pm\f{1}{2}}^{x}(t) &= \int_{I_{i}^{\varepsilon}}\mathbf{F}^{x}(\varepsilon,x_{j\pm\f{1}{2}},t)\,d\varepsilon, \label{eq:fluxSpace} \\ \mathbf{F}_{i\pm\f{1}{2},j}^{\varepsilon}(t) &= \int_{I_{j}^{x}}\mathbf{F}^{\varepsilon}(\varepsilon_{i\pm\f{1}{2}},x,t)\,dx. \label{eq:fluxEnergy} \end{align} In Eq.~\eqref{eq:fluidTwoMomentSystemSemiDiscrete}, the temporal dimension has been left continuous (semi-discrete). Moreover, the equation is still exact. Approximations enter with the specification of the fluxes in Eqs.~\eqref{eq:fluxSpace} and \eqref{eq:fluxEnergy}, and of the integrals needed to evaluate the sources $\mathbf{S}_{ij}$ and $\mathbf{C}_{ij}$. These approximations ultimately result in phase-space discretization errors. With these specifications, the approximate system in Eq.~\eqref{eq:fluidTwoMomentSystemSemiDiscrete} can be viewed as a system of ordinary differential equations (ODEs), which can be integrated forward in time with an ODE solver; the ODE solver introduces temporal discretization errors. This discretization approach is called the method of lines (MOL).
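Schematically, the MOL separates the phase-space discretization from the time integration. The following minimal sketch (in Python; the function names, array shapes, and the choice of integrator are ours, for illustration only) assembles the right-hand side of Eq.~\eqref{eq:fluidTwoMomentSystemSemiDiscrete} from precomputed interface fluxes and advances the cell averages with a second-order strong-stability-preserving Runge--Kutta step:
\begin{verbatim}
import numpy as np

def rhs(U, dV, flux_x, flux_e, sources):
    # Semi-discrete right-hand side: divided differences of the
    # interface fluxes plus local sources.
    #   flux_x(U) -> shape (N_e, N_x + 1), spatial interface fluxes
    #   flux_e(U) -> shape (N_e + 1, N_x), energy interface fluxes
    #                (already including the interface energy factors)
    #   sources(U) -> cell-averaged geometry + collision sources
    Fx = flux_x(U)
    Fe = flux_e(U)
    return (-(Fx[:, 1:] - Fx[:, :-1]) / dV
            - (Fe[1:, :] - Fe[:-1, :]) / dV
            + sources(U))

def ssprk2_step(U, dt, rhs_of):
    # SSP-RK2 (Heun): forward-Euler predictor, then a convex
    # combination of the old state and the corrected state.
    U1 = U + dt * rhs_of(U)
    return 0.5 * (U + U1 + dt * rhs_of(U1))
\end{verbatim}
In practice, the collision source is stiff and is treated implicitly rather than folded into an explicit step like this; the sketch shows only the explicit MOL structure, to which we return below.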
\subsubsection{Spatial discretization} The spatial fluxes in Eq.~\eqref{eq:fluxSpace} can be approximated with an appropriate numerical flux: \begin{equation} \mathbf{F}_{ij+\f{1}{2}}^{x}(t) \approx\Delta \varepsilon_{i}\,\widehat{\mathbf{F}^{x}}\big(\mathbf{U}(\varepsilon_{i},x_{j+\f{1}{2}}^{-},t),\mathbf{U}(\varepsilon_{i},x_{j+\f{1}{2}}^{+},t)\big), \label{eq:numericalFluxSpace} \end{equation} where $\mathbf{U}(\varepsilon_{i},x_{j+\f{1}{2}}^{\pm},t)$ is an approximation of $\mathbf{U}$ to the immediate left and right of the cell interface located at $x_{j+\f{1}{2}}$ ($x_{j+\f{1}{2}}^{\pm}=\lim_{\delta\to0^{+}}x_{j+\f{1}{2}}\pm\delta$). (In Eq.~\eqref{eq:numericalFluxSpace}, the midpoint rule is used to approximate the integral, but a more accurate quadrature rule can be used if desired.) Two things must be defined when computing the interface fluxes: (1) the procedure to \emph{reconstruct} the ``left'' and ``right'' states, and (2) the numerical flux function $\widehat{\mathbf{F}^{x}}$. The reconstruction step for radiation variables is essentially identical to that used for hydrodynamics schemes: a polynomial of degree $k$ is reconstructed from the evolved quantities (cell averages). Thus, the accuracy of the numerical method depends in part on the degree of the reconstructed polynomial, and the desired polynomial degree impacts the width of the computational stencil, since values in $k+1$ cells are needed to reconstruct a polynomial of degree $k$. The most commonly used methods are monotonized piecewise linear \citep{vanLeer74,Lev92} and piecewise parabolic methods \citep{CoWo84}, as well as higher-order monotonicity-preserving (MP) \citep{SuHu97} and weighted essentially nonoscillatory (WENO) reconstruction methods \citep{LiOsCh94,Shu97}. Monotonicity constraints are placed on the reconstruction polynomial to ensure nonoscillatory solutions around discontinuities. For fluid variables, the numerical flux function can be computed with a standard Riemann solver; e.g., HLL \citep{HaLaLe83} or HLLC \citep{ToSpSp94}. However, when using finite-volume or finite-difference methods to solve for the radiation moments, specification of the numerical flux requires additional care. As elucidated by the analysis of \citet{AuChCh02} in the context of the $\mathcal{O}(v/c)$ limit of the energy-integrated (gray) Lagrangian two-moment model presented in Sect.~\ref{sec:TwoMoment}, in the asymptotic diffusion limit (characterized by a short neutrino mean free path), the inherent numerical dissipation associated with the numerical flux used for hyperbolic systems overwhelms the physical radiative diffusive flux and leads to spurious evolution, unless the mean free path is resolved by the spatial grid. We discuss this important issue further below \citep[see also][for discussions on this topic]{JiLe96,LoMo01}. Since it is not practical to resolve the neutrino mean free path in core-collapse supernova simulations, the numerical fluxes for the radiation moment equations are modified to better capture the evolution in diffusive regimes.
Following \citet{AuChCh02}, \citet{OcOt13} propose the following modified HLL numerical fluxes for the two-moment model for neutrino transport \citep[see also][]{KuTaKo16}: \begin{align} \widehat{F_{\mathcal{E}_{s}}^{x}}\big(\mathbf{U}_{\mbox{\tiny{\sc L}}},\mathbf{U}_{\mbox{\tiny{\sc R}}}\big) &=\f{\lambda^{+}F_{\mathcal{E}_{s}}^{x}(\mathbf{U}_{\mbox{\tiny{\sc L}}})+\lambda^{-}F_{\mathcal{E}_{s}}^{x}(\mathbf{U}_{\mbox{\tiny{\sc R}}})-\xi\lambda^{-}\lambda^{+}\big((\mathcal{E}_{s})_{\mbox{\tiny{\sc R}}}-(\mathcal{E}_{s})_{\mbox{\tiny{\sc L}}}\big)}{\lambda^{-}+\lambda^{+}} \label{eq:modifiedNumericalFluxEnergy} , \\ \widehat{F_{\mathcal{S}_{s,j}}^{x}}\big(\mathbf{U}_{\mbox{\tiny{\sc L}}},\mathbf{U}_{\mbox{\tiny{\sc R}}}\big) &=\f{\xi^{2}\big(\lambda^{+}F_{\mathcal{S}_{s,j}}^{x}(\mathbf{U}_{\mbox{\tiny{\sc L}}})+\lambda^{-}F_{\mathcal{S}_{s,j}}^{x}(\mathbf{U}_{\mbox{\tiny{\sc R}}})\big)-\xi\lambda^{-}\lambda^{+}\big((\mathcal{S}_{s,j})_{\mbox{\tiny{\sc R}}}-(\mathcal{S}_{s,j})_{\mbox{\tiny{\sc L}}}\big)}{\lambda^{-}+\lambda^{+}} \nonumber \\ &\hspace{12pt} +(1-\xi^{2})\,\f{1}{2}\,\big(\,F_{\mathcal{S}_{s,j}}^{x}(\mathbf{U}_{\mbox{\tiny{\sc L}}})+F_{\mathcal{S}_{s,j}}^{x}(\mathbf{U}_{\mbox{\tiny{\sc R}}})\,\big), \label{eq:modifiedNumericalFluxMomentum} \end{align} where $F_{\mathcal{E}_{s}}^{x}$ and $F_{\mathcal{S}_{s,j}}^{x}$ are the radiation energy and momentum spatial fluxes, respectively, and $\lambda^{-}$ and $\lambda^{+}$ are estimates of the largest (absolute) eigenvalues for left-going and right-going waves, respectively \cite[see, e.g.,][for explicit expressions of estimates]{ShKiSe11}. In the modified numerical fluxes in Eqs.~\eqref{eq:modifiedNumericalFluxEnergy} and \eqref{eq:modifiedNumericalFluxMomentum}, $\xi$ is a local parameter depending on the ratio of the neutrino mean free path to the local grid size: \begin{equation} \xi = \min\big(1,\lambda_{ij}/\Delta x_{j}\big), \end{equation} where $\lambda_{ij}$ is a local, energy-dependent neutrino mean free path (computed from the neutrino opacities). Thus, when the mean free path is much smaller than a grid cell ($\xi\to0$), the numerical dissipation term (proportional to the jump in the conserved variables across the interface) vanishes, and the numerical flux switches to an average of the fluxes evaluated with the left and right states (a similar approach is also taken in \citet{JuObJa15,SkDoBu19}). It should be noted that the average flux is appropriate for solving parabolic equations, but is in general unstable for hyperbolic equations \citep[e.g.,][]{Lev92}. To further illustrate the issue with the numerical flux, and to see how the modifications in Eqs.~\eqref{eq:modifiedNumericalFluxEnergy}--\eqref{eq:modifiedNumericalFluxMomentum} help, it is easiest to consider the reduced system \begin{align} \pd{\mathcal{J}}{t}+\pd{\mathcal{H}}{x} &= 0, \label{eq:reducedTwoMomentEnergyEquation} \\ \pd{\mathcal{H}}{t}+\pd{\mathcal{K}}{x} &=-\f{1}{\lambda}\,\mathcal{H} \label{eq:reducedTwoMomentMomentumEquation}, \end{align} where \begin{equation} \big\{\,\mathcal{J},\mathcal{H},\mathcal{K}\,\big\}(x,t) = \f{1}{2}\int_{-1}^{1}f(\mu,x,t)\,\mu^{\{0,1,2\}}\,d\mu, \end{equation} and $\lambda$ is the scattering mean free path. 
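For concreteness, a minimal sketch of the modified fluxes in Eqs.~\eqref{eq:modifiedNumericalFluxEnergy} and \eqref{eq:modifiedNumericalFluxMomentum} (in Python; the function and variable names are ours, for illustration only) makes the role of $\xi$ explicit:
\begin{verbatim}
def modified_hll_flux_energy(F_L, F_R, E_L, E_R,
                             lam_m, lam_p, mfp, dx):
    # Modified HLL flux for the radiation energy equation. lam_m and
    # lam_p are positive estimates of the fastest left-/right-going
    # wave speeds; mfp is the local, energy-dependent mean free path.
    xi = min(1.0, mfp / dx)
    return (lam_p * F_L + lam_m * F_R
            - xi * lam_m * lam_p * (E_R - E_L)) / (lam_m + lam_p)

def modified_hll_flux_momentum(F_L, F_R, S_L, S_R,
                               lam_m, lam_p, mfp, dx):
    # Blend of the HLL flux (weight xi^2) with a simple average
    # (weight 1 - xi^2): in scattering-dominated regions (xi -> 0)
    # the numerical dissipation term vanishes.
    xi = min(1.0, mfp / dx)
    hll = (xi**2 * (lam_p * F_L + lam_m * F_R)
           - xi * lam_m * lam_p * (S_R - S_L)) / (lam_m + lam_p)
    return hll + (1.0 - xi**2) * 0.5 * (F_L + F_R)
\end{verbatim}
As $\xi\to1$ (mean free path resolved by the grid) the standard HLL flux is recovered, while as $\xi\to0$ the dissipation term is switched off and the momentum flux tends to the simple average discussed above.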
When scattering events are frequent ($\lambda\to0$), the system in Eqs.~\eqref{eq:reducedTwoMomentEnergyEquation}--\eqref{eq:reducedTwoMomentMomentumEquation} limits to parabolic behavior governed by \begin{equation} \pd{\mathcal{J}}{t}+\pd{\mathcal{H}}{x}=0 \quad\text{and}\quad \mathcal{H}=-\f{\lambda}{3}\pd{\mathcal{J}}{x} \quad\Rightarrow\quad \pd{\mathcal{J}}{t}-\f{\lambda}{3}\,\partial_{xx}\mathcal{J}=0, \label{eq:simpleDiffusionLimit} \end{equation} which is referred to as the diffusion limit. The semi-discrete form of Eqs.~\eqref{eq:reducedTwoMomentEnergyEquation}--\eqref{eq:reducedTwoMomentMomentumEquation} can be written as \begin{align} d_{t}\mathcal{J}_{i}+\f{1}{\Delta x}\Big(\,\widehat{\mathcal{H}}_{i+\f{1}{2}}-\widehat{\mathcal{H}}_{i-\f{1}{2}}\,\Big)&=0, \label{eq:reducedTwoMomentEnergyEquationSemiDiscrete} \\ d_{t}\mathcal{H}_{i}+\f{1}{\Delta x}\Big(\,\widehat{\mathcal{K}}_{i+\f{1}{2}}-\widehat{\mathcal{K}}_{i-\f{1}{2}}\,\Big)&=-\f{1}{\lambda}\,\mathcal{H}_{i}. \label{eq:reducedTwoMomentMomentumEquationSemiDiscrete} \end{align} With constant reconstruction, which results in first-order spatial accuracy, the numerical fluxes in Eqs.~\eqref{eq:modifiedNumericalFluxEnergy}--\eqref{eq:modifiedNumericalFluxMomentum} at the $x_{i+\f{1}{2}}$ interface become \begin{align} \widehat{\mathcal{H}}_{i+\f{1}{2}} &=\f{1}{2}\Big(\,\mathcal{H}_{i+1}+\mathcal{H}_{i}-\xi\,\big(\,\mathcal{J}_{i+1}-\mathcal{J}_{i}\,\big)\,\Big), \label{eq:modifiedNumericalFluxEnergySimple} \\ \widehat{\mathcal{K}}_{i+\f{1}{2}} &=\f{1}{2}\Big(\,\mathcal{K}_{i+1}+\mathcal{K}_{i}-\xi\,\big(\,\mathcal{H}_{i+1}-\mathcal{H}_{i}\,\big)\,\Big), \label{eq:modifiedNumericalFluxMomentumSimple} \end{align} where for simplicity we set $\lambda^{+}=\lambda^{-}=1$ (i.e., the global Lax-Friedrichs flux). By ignoring the time derivative term in Eq.~\eqref{eq:reducedTwoMomentMomentumEquationSemiDiscrete} and using the numerical flux in Eq.~\eqref{eq:modifiedNumericalFluxMomentumSimple} with $\mathcal{K}=\mathcal{J}/3$, one can write \begin{align} \mathcal{H}_{i} &= - \mathrm{Kn}\,\f{1}{2}\,\Big(\,\f{1}{3}\,\big(\,\mathcal{J}_{i+1}-\mathcal{J}_{i-1}\,\big)-\xi\,\big(\,\mathcal{H}_{i-1}-2\,\mathcal{H}_{i}+\mathcal{H}_{i+1}\,\big)\,\Big), \nonumber \\ &\approx - \mathrm{Kn}\,\f{1}{2}\,\f{1}{3}\,\big(\,\mathcal{J}_{i+1}-\mathcal{J}_{i-1}\,\big), \label{eq:momentumDensitySimpleApproximate} \end{align} where we have introduced the Knudsen number $\mathrm{Kn}=\lambda/\Delta x$, the ratio of the mean free path to the spatial grid size. In Eq.~\eqref{eq:momentumDensitySimpleApproximate}, we ignored the numerical dissipation term because in the diffusion limit $|\mathcal{H}|\ll\mathcal{J}$. Then, inserting the numerical flux, Eq.~\eqref{eq:modifiedNumericalFluxEnergySimple}, using Eq.~\eqref{eq:momentumDensitySimpleApproximate}, into Eq.~\eqref{eq:reducedTwoMomentEnergyEquationSemiDiscrete} gives the approximate semi-discrete form of Eq.~\eqref{eq:reducedTwoMomentEnergyEquation} in the diffusion limit: \begin{align} &d_{t}\mathcal{J}_{i} -\f{1}{(2\Delta x)^{2}} \Big[\, \f{\lambda}{3}\Big(\mathcal{J}_{i-2}-2\,\mathcal{J}_{i}+\mathcal{J}_{i+2}\Big) \nonumber \\ &\hspace{72pt} +\min(\lambda,\Delta x)\Big(\mathcal{J}_{i-1}-2\,\mathcal{J}_{i}+\mathcal{J}_{i+1}\Big) \,\Big]=0, \label{eq:simpleDiffusionLimitSemiDiscrete} \end{align} which is an approximation to the diffusion equation in Eq.~\eqref{eq:simpleDiffusionLimit}. 
Note that the last term on the left-hand side of Eq.~\eqref{eq:simpleDiffusionLimitSemiDiscrete} is due to the numerical dissipation term (proportional to $\xi$) in Eq.~\eqref{eq:modifiedNumericalFluxEnergySimple}. Because of the introduction of $\xi$ in Eq.~\eqref{eq:modifiedNumericalFluxEnergySimple}, Eq.~\eqref{eq:simpleDiffusionLimitSemiDiscrete} remains a reasonable approximation to a diffusion equation with the correct diffusion coefficient $\lambda/3$, even when $\lambda\ll\Delta x$. Without the modification to the numerical flux (i.e., $\xi=1$ independent of $\lambda$), we would obtain Eq.~\eqref{eq:simpleDiffusionLimitSemiDiscrete} with $\min(\lambda,\Delta x)\to\Delta x$. In this case, the numerical diffusion term would overwhelm the physical diffusion term when $\lambda\ll\Delta x$ and result in spurious evolution. Note that in this simplified discussion, where constant spatial reconstruction was assumed, the numerical dissipation term is of the same order of magnitude as the physical dissipation term and still contributes to the diffusive evolution. With higher-order accurate spatial reconstruction, the relative contribution of this term decreases. Also note that in arriving at Eq.~\eqref{eq:simpleDiffusionLimitSemiDiscrete}, we relied only on the modification to the numerical flux in the energy equation, as is done by \citet{SkDoBu19}. Finally, note that in the physical diffusion term in Eq.~\eqref{eq:simpleDiffusionLimitSemiDiscrete}, the second derivative is approximated with a wide stencil, which supports a mode with odd-even point decoupling \citep{LoMo01}. \subsubsection{Energy discretization} \label{sec:EnergyDiscretization} Next we consider the approximation of the energy fluxes in Eq.~\eqref{eq:fluxEnergy}, which contribute to shifts in the neutrino energy spectrum due to gravitational and moving-fluid effects. \citet{MuJaDi10}, who solved the Lagrangian two-moment model in Sect.~\ref{sec:TwoMoment}, developed a method to compute the energy fluxes that is inherently number conservative; i.e., with this discretization of the energy derivative, the energy equation in the Lagrangian two-moment model in Eq.~\eqref{eq:spectralLagrangianEnergyEquation_3p1} is consistent with the equation for number conservation in Eq.~\eqref{eq:spectralNumberEquation_3p1} at the discrete level. A key observation in achieving this is that the number conservation equation is obtained by multiplying the Lagrangian energy equation by a factor of $1/\varepsilon$. At the continuum level, when this factor is brought inside the energy derivative, the remainder cancels with the first term on the right-hand side of Eq.~\eqref{eq:spectralLagrangianEnergyEquation_3p1}, resulting in the conservative number equation in Eq.~\eqref{eq:spectralNumberEquation_3p1}. The relevant equation is given by considering only the energy derivative and the (non-collisional) source term in Eq.~\eqref{eq:spectralLagrangianEnergyEquation_3p1} \citep[cf.\ Eq.~(B1) in][]{MuJaDi10}: \begin{equation} \pd{J}{t}+\pd{}{\varepsilon}\big(\,\varepsilon\,F_{J}\,\big)=F_{J}, \label{eq:energyAdvectionEquation} \end{equation} where we introduce the shorthand notation \begin{align} J = \sqrt{\gamma}\,\varepsilon^{2}\,\big(\,W\mathcal{J}+v^{i}\mathcal{H}_{i}\,\big) \quad\text{and}\quad F_{J} = - \alpha\,\sqrt{\gamma}\,\varepsilon^{2}\,\mathcal{T}^{\mu\nu}\nabla_{\mu}u_{\nu}.
\end{align} Dividing Eq.~\eqref{eq:energyAdvectionEquation} by $\varepsilon$ gives the conservation equation: \begin{equation} \pd{N}{t}+\pd{}{\varepsilon}\big(\,F_{J}\,\big)=0, \label{eq:numberAdvectionEquation} \end{equation} where $N=J/\varepsilon$ is the spectral Eulerian number density (cf.\ Eq.~\eqref{eq:spectralNumberEquation_3p1}). Similar to Eq.~\eqref{eq:fluidTwoMomentSystemSemiDiscrete}, the semi-discrete form of Eq.~\eqref{eq:energyAdvectionEquation} can be written as \begin{equation} \deriv{J_{i}}{t} =-\f{1}{\Delta \varepsilon_{i}}\big(\,\varepsilon_{i+\f{1}{2}}{\widehat{F_{J}}}_{i+\f{1}{2}}-\varepsilon_{i-\f{1}{2}}{\widehat{F_{J}}}_{i-\f{1}{2}}\,\big) + {F_{J}}_{i}, \label{eq:energyAdvectionEquationSemiDiscrete} \end{equation} where ${\widehat{F_{J}}}_{i\pm\f{1}{2}}$ are the numerical flux functions to be determined. (Here we drop the spatial index $j$ to simplify the notation.) Dividing Eq.~\eqref{eq:energyAdvectionEquationSemiDiscrete} by $\varepsilon_{i}$ and defining $N_{i}=J_{i}/\varepsilon_{i}$ gives a provisional semi-discrete form of Eq.~\eqref{eq:numberAdvectionEquation}: \begin{align} \deriv{N_{i}}{t} &=-\f{1}{\Delta \varepsilon_{i}} \big(\, \f{\varepsilon_{i+\f{1}{2}}}{\varepsilon_{i}}{\widehat{F_{J}}}_{i+\f{1}{2}} -\f{\varepsilon_{i-\f{1}{2}}}{\varepsilon_{i}}{\widehat{F_{J}}}_{i-\f{1}{2}} \,\big) + \f{{F_{J}}_{i}}{\varepsilon_{i}} \label{eq:numberAdvectionEquationSemiDiscrete} \\ &=-\f{1}{\Delta \varepsilon_{i}} \big(\, {\widehat{F_{J}}}_{i+\f{1}{2}} - {\widehat{F_{J}}}_{i-\f{1}{2}} \,\big) -\f{(\varepsilon_{i+\f{1}{2}}-\varepsilon_{i})}{\Delta \varepsilon_{i}}\f{{\widehat{F_{J}}}_{i+\f{1}{2}}}{\varepsilon_{i}} -\f{(\varepsilon_{i}-\varepsilon_{i-\f{1}{2}})}{\Delta \varepsilon_{i}}\f{{\widehat{F_{J}}}_{i-\f{1}{2}}}{\varepsilon_{i}} + \f{{F_{J}}_{i}}{\varepsilon_{i}}. \nonumber \end{align} Without specifying the numerical fluxes ${\widehat{F_{J}}}_{i\pm\f{1}{2}}$, the last three terms in the second line of Eq.~\eqref{eq:numberAdvectionEquationSemiDiscrete} do not, in general, cancel, and the neutrino number density is not conserved in the energy advection step, contrary to what is suggested by Eq.~\eqref{eq:numberAdvectionEquation}. However, there is some freedom in choosing the numerical fluxes. To determine the numerical fluxes, \citet{MuJaDi10} demand total number conservation upon integration of Eq.~\eqref{eq:numberAdvectionEquationSemiDiscrete} over all energy bins; i.e., \begin{align} 0=d_{t}N_{\mbox{\tiny{\sc Tot}}} &\equiv\sum_{i=1}^{N_{\varepsilon}}\deriv{N_{i}}{t}\,\Delta \varepsilon_{i} =-\sum_{i=1}^{N_{\varepsilon}} \Big\{\,\f{\varepsilon_{i+\f{1}{2}}}{\varepsilon_{i}}{\widehat{F_{J}}}_{i+\f{1}{2}} -\f{\varepsilon_{i-\f{1}{2}}}{\varepsilon_{i}}{\widehat{F_{J}}}_{i-\f{1}{2}} -\f{\Delta \varepsilon_{i}}{\varepsilon_{i}}{F_{J}}_{i}\,\Big\} \nonumber \\ &=-\sum_{i=1}^{N_{\varepsilon}}\Big\{\,\Big(\,\f{1}{\varepsilon_{i}}-\f{1}{\varepsilon_{i+1}}\,\Big)\,\varepsilon_{i+\f{1}{2}}\,{\widehat{F_{J}}}_{i+\f{1}{2}}-\f{\Delta \varepsilon_{i}}{\varepsilon_{i}}\,{F_{J}}_{i}\,\Big\}, \end{align} where zero-flux boundaries in energy space are assumed (i.e., ${\widehat{F_{J}}}_{\f{1}{2}}={\widehat{F_{J}}}_{N_{\varepsilon}+\f{1}{2}}=0$).
Next, the numerical flux is split into ``left'' and ``right'' contributions \begin{equation} {\widehat{F_{J}}}_{i+\f{1}{2}} = {F_{J}^{\mbox{\tiny{\sc L}}}}_{i} + {F_{J}^{\mbox{\tiny{\sc R}}}}_{i+1}, \label{eq:energyFluxSplit} \end{equation} so that the change in the total number density can be written as (assuming $\varepsilon_{\f{1}{2}}=0$ and setting ${F_{J}^{\mbox{\tiny{\sc R}}}}_{N_{\varepsilon}+1}=0$) \begin{equation} d_{t}N_{\mbox{\tiny{\sc Tot}}}=-\sum_{i=1}^{N_{\varepsilon}} \Big\{\, \Big(\f{1}{\varepsilon_{i}}-\f{1}{\varepsilon_{i+1}}\Big)\,\varepsilon_{i+\f{1}{2}}\,{F_{J}^{\mbox{\tiny{\sc L}}}}_{i} +\Big(\f{1}{\varepsilon_{i-1}}-\f{1}{\varepsilon_{i}}\Big)\,\varepsilon_{i-\f{1}{2}}\,{F_{J}^{\mbox{\tiny{\sc R}}}}_{i} -\f{\Delta \varepsilon_{i}}{\varepsilon_{i}}\,{F_{J}}_{i} \,\Big\}. \label{eq:totalNumberChange} \end{equation} Number conservation is then obtained by demanding \begin{equation} \Big(\f{1}{\varepsilon_{i}}-\f{1}{\varepsilon_{i+1}}\Big)\,\varepsilon_{i+\f{1}{2}}\,{F_{J}^{\mbox{\tiny{\sc L}}}}_{i} +\Big(\f{1}{\varepsilon_{i-1}}-\f{1}{\varepsilon_{i}}\Big)\,\varepsilon_{i-\f{1}{2}}\,{F_{J}^{\mbox{\tiny{\sc R}}}}_{i} =\f{\Delta \varepsilon_{i}}{\varepsilon_{i}}\,{F_{J}}_{i}. \end{equation} Furthermore, \citet{MuJaDi10} introduce \begin{align} \varepsilon_{i+\f{1}{2}}\,{F_{J}^{\mbox{\tiny{\sc L}}}}_{i} &=\f{\Delta \varepsilon_{i}}{1-\varepsilon_{i}\varepsilon_{i+1}^{-1}}\,{F_{J}}_{i}\,\xi_{i}, \\ \varepsilon_{i-\f{1}{2}}\,{F_{J}^{\mbox{\tiny{\sc R}}}}_{i} &=\f{\Delta \varepsilon_{i}}{\varepsilon_{i}\varepsilon_{i-1}^{-1}-1}\,{F_{J}}_{i}\,(1-\xi_{i}), \end{align} where $\xi_{i}$ is a local weighting factor \begin{equation} \xi_{i}=\f{j_{i+\f{1}{2}}^{\sigma}}{j_{i-\f{1}{2}}^{\sigma}+j_{i+\f{1}{2}}^{\sigma}} \quad\text{and}\quad 1-\xi_{i}=\f{j_{i-\f{1}{2}}^{\sigma}}{j_{i-\f{1}{2}}^{\sigma}+j_{i+\f{1}{2}}^{\sigma}} , \end{equation} depending on the zeroth moment ($j$) of the distribution function at cell interfaces, $j_{i-\f{1}{2}}^{\sigma}$ and $j_{i+\f{1}{2}}^{\sigma}$, which are computed as weighted geometric means of $j$ using values from adjacent energy bins. In regions where $J_{i}$ varies modestly with $i$, $\xi_{i}$ is close to $1/2$, while in the high-energy tail of the neutrino spectrum, where $J_{i}$ decreases rapidly with increasing $i$, $\xi_{i}\ll1$ \citep[see Appendix~B in][for further details]{MuJaDi10}. Then, using the split in Eq.~\eqref{eq:energyFluxSplit}, the numerical flux, e.g., at interface $\varepsilon_{i+\f{1}{2}}$, to be used in Eq.~\eqref{eq:energyAdvectionEquationSemiDiscrete} is given by \begin{align} \varepsilon_{i+\f{1}{2}}{\widehat{F_{J}}}_{i+\f{1}{2}} &=\varepsilon_{i+\f{1}{2}}\,{F_{J}^{\mbox{\tiny{\sc L}}}}_{i} + \varepsilon_{i+\f{1}{2}}\,{F_{J}^{\mbox{\tiny{\sc R}}}}_{i+1} \nonumber \\ &=\f{\Delta \varepsilon_{i}}{1-\varepsilon_{i}\varepsilon_{i+1}^{-1}}\,{F_{J}}_{i}\,\xi_{i} + \f{\Delta \varepsilon_{i+1}}{\varepsilon_{i+1}\varepsilon_{i}^{-1}-1}\,{F_{J}}_{i+1}\,(1-\xi_{i+1}). 
\label{eq:numericalFluxEnergySpaceGeneral} \end{align} For a commonly used geometrically progressing grid, where $\varepsilon_{i+\f{1}{2}}=\Delta \varepsilon_{1}\,\lambda^{i-1}$ (with $\lambda>1$ and $i=1,\ldots,N_{\varepsilon}$), it can be shown that $\Delta \varepsilon_{i}/(1-\varepsilon_{i}\varepsilon_{i+1}^{-1})=\Delta \varepsilon_{i+1}/(\varepsilon_{i+1}\varepsilon_{i}^{-1}-1)=\varepsilon_{i+\f{1}{2}}$, so that the numerical flux can be written as \begin{equation} {\widehat{F_{J}}}_{i+\f{1}{2}}\big({F_{J}}_{i},{F_{J}}_{i+1}\big) = {F_{J}}_{i}\,\xi_{i} + {F_{J}}_{i+1}\,(1-\xi_{i+1}), \label{eq:numericalFluxEnergySpace} \end{equation} which is simply a weighted average with nonlinear weights $\xi_{i}$ and $(1-\xi_{i+1})$. If $\xi_{i},\xi_{i+1}>0$ and $\xi_{i}+\xi_{i+1}=1$, the numerical flux is a convex combination of ${F_{J}}_{i}$ and ${F_{J}}_{i+1}$, but this is not guaranteed. Although the numerical flux in Eq.~\eqref{eq:numericalFluxEnergySpaceGeneral} was developed by \citet{MuJaDi10} to ensure neutrino number conservation in the context of the Lagrangian two-moment model, the same approach has also been applied to the Eulerian two-moment model by \citet{OCon15,KuTaKo16}. (It is not at all clear that the approach developed by \citet{MuJaDi10} in the context of the Lagrangian two-moment model results in a number conservative scheme when applied to the Eulerian two-moment model. In the Lagrangian two-moment model, the spectral neutrino number and energy equations are related simply by a factor of $1/\epsilon$, whereas in an Eulerian two-moment model, the relationship is more complex, involving both the spectral neutrino energy and momentum equations \citep[cf.][]{EnCaMe12c,CaEnMe13a}.) We note that the numerical flux in Eq.~\eqref{eq:numericalFluxEnergySpaceGeneral} is also used by \citet{JuObJa15}, who solve the Lagrangian two-moment model in the $\mathcal{O}(v/c)$ limit. A few remarks should be made about the numerical flux in Eq.~\eqref{eq:numericalFluxEnergySpace}. First, a numerical flux is said to be consistent if, when the two arguments are set equal, it reduces to the common value; i.e., when ${F_{J}}_{i}={F_{J}}_{i+1}={F_{J}}$ the following holds: \begin{equation} {\widehat{F_{J}}}_{i+\f{1}{2}}\big({F_{J}},{F_{J}}\big)={F_{J}}. \label{eq:consistentNumericalFlux} \end{equation} Consistency of the numerical flux is generally required for a numerical method to be convergent \citep{CrMa80,Lev02}. Since it is not guaranteed that $\xi_{i}+\xi_{i+1}=1$, the numerical flux in Eq.~\eqref{eq:numericalFluxEnergySpace} is not consistent. Second, if one sets $\xi_{i}=1/2~\forall i$ (which makes it consistent), the numerical flux in Eq.~\eqref{eq:numericalFluxEnergySpace} reduces to a simple arithmetic average, which is notoriously unstable when combined with explicit time integration \citep[e.g.,][]{Lev02}. \citet{SkDoBu19}, who also solve the Lagrangian two-moment model in the $\mathcal{O}(v/c)$ limit, follow a different approach, adapted from \citet{VaAuDu11}.
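Before turning to that approach, a minimal sketch of the number-conservative flux just described, Eq.~\eqref{eq:numericalFluxEnergySpace} for a geometrically progressing grid, may be helpful (in Python; the function and variable names are ours, for illustration only):
\begin{verbatim}
def xi_weight(j_minus, j_plus, sigma=1.0):
    # Local weighting factor xi_i built from interface values of the
    # zeroth moment j (computed, e.g., as weighted geometric means of
    # values in adjacent energy bins).
    return j_plus**sigma / (j_minus**sigma + j_plus**sigma)

def energy_flux_interface(FJ_i, FJ_ip1, xi_i, xi_ip1):
    # Weighted average with nonlinear weights xi_i and (1 - xi_{i+1});
    # reduces to the arithmetic average for xi_i = xi_{i+1} = 1/2.
    return FJ_i * xi_i + FJ_ip1 * (1.0 - xi_ip1)
\end{verbatim}
The nonlinear weights bias the flux toward the lower-energy bin in the steep high-energy tail of the spectrum (where $\xi_{i}\ll1$), which is what renders the scheme robust there.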
In this case, assuming Cartesian coordinates for simplicity, the evolved quantity and the flux in energy space in the neutrino energy equation (cf.\ Eq.~\eqref{eq:energyAdvectionEquation}) are given by \begin{equation} J = \varepsilon^{2}\mathcal{J} \quad\text{and}\quad F_{J} = - \varepsilon^{2}\mathcal{K}^{i}_{\hspace{4pt}j}\,\pd{v^{j}}{i}, \end{equation} where $\mathcal{K}^{i}_{\hspace{4pt}j}$ is the radiation stress tensor (cf.\ Eq.~\eqref{eq:radiationStressTensor}) and $v^{i}$ are components of the fluid three-velocity. Similarly, the evolved quantity and flux in energy space from the neutrino momentum equation are given by \begin{align} H_{k} = \varepsilon^{2}\mathcal{H}_{k} \quad\text{and}\quad F_{H_{k}} = - \varepsilon^{2}\mathcal{L}^{i}_{\hspace{4pt}jk}\,\pd{v^{j}}{i}, \end{align} where $\mathcal{L}^{i}_{\hspace{4pt}jk}$ is the heat flux tensor in Eq.~\eqref{eq:radiationHeatFluxTensor}. With $\mathbf{u}=\big(\,J,H_{k}\,\big)^{T}$ and $\mathbf{f}^{\varepsilon}(\mathbf{u})=\big(\,F_{J},F_{H_{k}}\,\big)^{T}$, the subsystem to be solved is then given by \begin{equation} \pd{\mathbf{u}}{t}+\pd{\big(\,\varepsilon\,\mathbf{f}^{\varepsilon}(\mathbf{u})\big)}{\varepsilon} = 0, \end{equation} which is a familiar advection-type equation. For the energy equation, the numerical flux in energy space is then given by \begin{equation} {\widehat{F_{J}}}_{i+\f{1}{2}} = - \varepsilon_{i+\f{1}{2}}^{2}\,\widehat{\mathcal{K}}^{i}_{\hspace{4pt}j}(\varepsilon_{i+\f{1}{2}})\,\pd{v^{j}}{i}, \label{eq:numericalFluxEnergySpaceUpwind} \end{equation} where an upwind approach is used to compute \begin{equation} \widehat{\mathcal{K}}^{i}_{\hspace{4pt}j}(\varepsilon_{i+\f{1}{2}}) =\left\{ \begin{array}{rl} \mathcal{K}^{i}_{\hspace{4pt}j}(\varepsilon_{i+\f{1}{2}}^{-}), & \text{if } \pd{v^{j}}{i} < 0\\ \mathcal{K}^{i}_{\hspace{4pt}j}(\varepsilon_{i+\f{1}{2}}^{+}), & \text{if } \pd{v^{j}}{i} \ge 0. \end{array} \right. \end{equation} A similar expression is used for the energy-space fluxes in the radiation momentum equation. The eigenvalues of the flux Jacobian $\partial\mathbf{f}^{\varepsilon}/\partial\mathbf{u}$ associated with the reduced system of equations governing the ``advection'' in energy space are always of the same sign \citep{VaAuDu11}; this is one motivation for using the upwind flux. Although the numerical flux in Eq.~\eqref{eq:numericalFluxEnergySpaceUpwind} does not necessarily lead to exact number conservation (unlike the corresponding numerical flux developed by \citet{MuJaDi10}), the upwind flux has desirable properties that can improve numerical stability (e.g., the upwind flux is consistent and can be used to design monotone numerical schemes \citep[cf.][]{CrMa80,Lev92}). \subsubsection{Time integration approaches} After the specification of approximations to the terms on the right-hand side of Eq.~\eqref{eq:fluidTwoMomentSystemSemiDiscrete}, the system is evolved in time with an ODE solver.
When solving the general relativistic radiation hydrodynamics system, \citet{KuTaKo16} write the resulting ODE system in the following form: \begin{equation} \deriv{\mathbf{U}}{t} + \mathbf{S}_{\mbox{\tiny adv,s}} + \mathbf{S}_{\mbox{\tiny adv,e}} + \mathbf{S}_{\mbox{\tiny grv}} + \mathbf{S}_{\nu\mbox{\tiny m}} = 0, \end{equation} where the spatial advection term $\mathbf{S}_{\mbox{\tiny adv,s}}$, the energy advection term $\mathbf{S}_{\mbox{\tiny adv,e}}$, the gravitational source term $\mathbf{S}_{\mbox{\tiny grv}}$, and the neutrino--matter interaction term $\mathbf{S}_{\nu\mbox{\tiny m}}$ correspond to the terms on the right-hand side of Eq.~\eqref{eq:fluidTwoMomentSystemSemiDiscrete}. (Here we omit phase-space indices for brevity.) In their time integration scheme, \citet{KuTaKo16} evaluate the spatial advection and gravitational source terms explicitly, while the energy advection and neutrino--matter interaction terms are evaluated implicitly: \begin{align} \f{\mathbf{U}^{*}-\mathbf{U}^{n}}{\Delta t} &+ \mathbf{S}_{\mbox{\tiny adv,s}}^{n} + \mathbf{S}_{\mbox{\tiny grv}}^{n} = 0, \label{eq:kurodaExplicitUpdate} \\ \f{\mathbf{U}^{n+1}-\mathbf{U}^{*}}{\Delta t} &+ \mathbf{S}_{\mbox{\tiny adv,e}}^{n+1} + \mathbf{S}_{\nu\mbox{\tiny m}}^{n+1} = 0. \label{eq:kurodaImplicitUpdate} \end{align} This splitting is a special case of a more general class of time integration methods referred to as implicit-explicit (IMEX) schemes \citep{AsRuSp97,PaRu05}. The splitting in Eqs.~\eqref{eq:kurodaExplicitUpdate}--\eqref{eq:kurodaImplicitUpdate} is first-order accurate in time, while higher-order accurate IMEX methods have also been developed. The main benefit of introducing this split is to avoid a distributed implicit solve, since the spatial advection term couples neighboring cells in space, which can reside on different processing units. On the downside, the time step is restricted by the speed of light, but this is acceptable for relativistic systems. In general, the neutrino--matter interaction term cannot be integrated efficiently in time with explicit methods because the stable time step needed to resolve the governing time scale is many orders of magnitude smaller than that associated with the spatial advection term. There is another benefit to integrating the neutrino--matter interaction terms separately with implicit methods: these terms are local in space, which makes them easier to parallelize. The energy advection term can be integrated with explicit or implicit methods. If explicit methods are used for this term, an additional time step restriction is incurred, but this is usually less severe than that introduced by the spatial advection term \citep[e.g.,][]{OCon15,JuObJa15}. On the other hand, since the neutrino--matter interaction terms couple the entire momentum space, including the energy advection term (which couples only nearest neighbors in energy) in the implicit update \citep[as is also done by, e.g.,][]{MuJaDi10} does not add significantly to the computational complexity. One should note that in their Appendix~B, \citet{KuTaKo16} report significantly different electron fraction profiles when comparing explicit versus implicit integration of $\mathbf{S}_{\mbox{\tiny adv,e}}$, but the reason for this is not clear. The implicit solve in Eq.~\eqref{eq:kurodaImplicitUpdate} requires the solution of a nonlinear system of equations.
To this end, \citet{KuTaKo16} write the system as \begin{equation} \mathbf{f}(\mathbf{P}^{n+1}) \equiv \f{\mathbf{U}(\mathbf{P}^{n+1})-\mathbf{U}^{*}}{\Delta t} + \mathbf{S}_{\mbox{\tiny adv,e}}(\mathbf{P}^{n+1}) + \mathbf{S}_{\nu\mbox{\tiny m}}(\mathbf{P}^{n+1}) = 0, \label{eq:kurodaNonlinearSystem} \end{equation} where the unknowns are given by the vector of ``primitive'' variables: \begin{equation} \mathbf{P} = \big(\,\rho,\,v_{j},\,s,\,Y_{e},\,\mathcal{E}_{1},\mathcal{F}_{1,j},\ldots,\mathcal{E}_{N_{\mbox{\tiny{\sc Sp}}}},\mathcal{F}_{N_{\mbox{\tiny{\sc Sp}}},j}\,\big)^{T}. \label{eq:kurodaPrimitive} \end{equation} To solve the nonlinear system in Eq.~\eqref{eq:kurodaNonlinearSystem}, \citet{KuTaKo16} employ a Newton-Raphson scheme: \begin{equation} \pderiv{\mathbf{f}(\mathbf{P}^{k})}{\mathbf{P}}\delta\mathbf{P}^{k} = - \mathbf{f}(\mathbf{P}^{k}) \quad\rightarrow\quad \mathbf{P}^{k+1} = \mathbf{P}^{k} + \delta\mathbf{P}^{k} \end{equation} for $k=0,1,2,\ldots$, with $\mathbf{P}^{0}=\mathbf{P}^{*}$. The iteration is continued until $|\delta\mathbf{P}^{k}|<\mbox{tol}\,|\mathbf{P}^{k}|$, where the tolerance is typically set to $\mbox{tol}=10^{-8}$. \citet{KuTaKo16} treat the problem fully implicitly, evaluating the neutrino--matter interactions at $t^{n+1}$, and thus include derivatives of the opacities in $\mathbf{S}_{\nu\mbox{\tiny m}}$ with respect to $\mathbf{P}$ in the Jacobian $(\partial\mathbf{f}/\partial\mathbf{P})$. To help convergence in the Newton-Raphson procedure, \citet{KuTaKo16} also monitor the change in total lepton number during iterations (see their Section~3.3 for details), which improves the robustness of the method. Note that in the primitive vector in Eq.~\eqref{eq:kurodaPrimitive} the radiation quantities are the Eulerian moments $\big(\mathcal{E},\mathcal{F}_{j}\big)$, while the closure and the neutrino--matter interaction terms are most naturally expressed in terms of the Lagrangian moments $\big(\mathcal{J},\mathcal{H}_{j}\big)$. To evaluate the closure and collision terms during the Newton-Raphson iterations, the Lagrangian moments are kept consistent with the Eulerian moments through the relations: \begin{align} \mathcal{J} &= u_{\mu}u_{\nu}\mathcal{T}^{\mu\nu} = W^{2}\,\mathcal{E} - 2\,W\,u_{i}\,\mathcal{F}^{i} + u_{i}u_{j}\mathcal{S}^{ij}, \\ \mathcal{H}_{j} &= - u_{\nu}\,h_{j\mu}\mathcal{T}^{\mu\nu} =\big(\,W\,\mathcal{E}-u_{k}\,\mathcal{F}^{k}\,\big)\,h_{j\mu}\,n^{\mu}+W\,h_{jk}\,\mathcal{F}^{k}-u_{i}\,h_{jk}\,\mathcal{S}^{ik}. \end{align} The number of iterations needed to reach convergence varies during a simulation. It is at its maximum in the center around core bounce (several tens), but settles down to $\sim10$ after the shock stalls. \citet{JuObJa15}, employing the $\mathcal{O}(v/c)$ limit of the Lagrangian two-moment model in Sect.~\ref{sec:TwoMoment} coupled to non-relativistic hydrodynamics, also use a combination of explicit and implicit methods to integrate the coupled equations in time, but ease the computational cost by treating some interaction terms explicitly. They split the solution vector into radiation variables $\mathbf{X}=(\mathcal{J},\mathcal{H}_{j})$ and fluid variables $\mathbf{U}=(\rho,\rho Y_{e},\rho\mathbf{v},e_{\mathrm{t}})$, where the total fluid energy density is $e_{\mathrm{t}}=e_{\mathrm{i}}+\rho v^{2}/2$, and $e_{\mathrm{i}}$ is the internal energy density.
\citet{JuObJa15}, employing the $\mathcal{O}(v/c)$ limit of the Lagrangian two-moment model in Sect.~\ref{sec:TwoMoment} coupled to non-relativistic hydrodynamics, also use a combination of explicit and implicit methods to integrate the coupled equations in time, but ease the computational cost by treating some interaction terms explicitly. They split the solution vector into radiation variables $\mathbf{X}=(\mathcal{J},\mathcal{H}_{j})$ and fluid variables $\mathbf{U}=(\rho,\rho Y_{e},\rho\mathbf{v},e_{\mathrm{t}})$, where the total fluid energy density is $e_{\mathrm{t}}=e_{\mathrm{i}}+\rho v^{2}/2$, and $e_{\mathrm{i}}$ is the internal energy density. They write the radiation hydrodynamics system as \begin{align} &\pd{\mathbf{X}}{t} + \big(\delta_{t}\mathbf{X}\big)_{\mathrm{hyp}} + \big(\delta_{t}\mathbf{X}\big)_{\mathrm{vel}} = \big(\delta_{t}\mathbf{X}\big)_{\mathrm{src}}, \\ &\pd{\mathbf{U}}{t} + \big(\delta_{t}\mathbf{U}\big)_{\mathrm{hyd}} = \big(\delta_{t}\mathbf{U}\big)_{\mathrm{src}}, \end{align} where, in the transport equations, $\big(\delta_{t}\mathbf{X}\big)_{\mathrm{hyp}}$ represents the velocity-independent hyperbolic terms, $\big(\delta_{t}\mathbf{X}\big)_{\mathrm{vel}}$ represents all the velocity-dependent terms, and $\big(\delta_{t}\mathbf{X}\big)_{\mathrm{src}}$ represents the neutrino--matter interactions. The phase-space advection terms combine to $\big(\delta_{t}\mathbf{X}\big)_{\mathrm{adv}}=\big(\delta_{t}\mathbf{X}\big)_{\mathrm{hyp}}+\big(\delta_{t}\mathbf{X}\big)_{\mathrm{vel}}$. In the hydrodynamics equations, $\big(\delta_{t}\mathbf{U}\big)_{\mathrm{hyd}}$ represents the non-radiative physics, while $\big(\delta_{t}\mathbf{U}\big)_{\mathrm{src}}$ represents the radiative source terms. For a given time step $\Delta t$, when advancing the system from $t^{n}$ to $t^{n+1}=t^{n}+\Delta t$, a `predictor' step to $t^{n+1/2}=t^{n}+\Delta t/2$ is performed first: \begin{align} \mathbf{X}^{n+\f{1}{2}} &=\mathbf{X}^{n} + \f{\Delta t}{2}\,\Big[\,-\big(\delta_{t}\mathbf{X}\big)_{\mathrm{hyp}}^{n}+\big(\delta_{t}\mathbf{X}\big)_{\mathrm{src}}^{n,n+\f{1}{2}}\,\Big], \label{eq:justRadiationPredictor} \\ \mathbf{U}^{n+\f{1}{2}} &=\mathbf{U}^{n} + \f{\Delta t}{2}\,\Big[\,-\big(\delta_{t}\mathbf{U}\big)_{\mathrm{hyd}}^{n}+\big(\delta_{t}\mathbf{U}\big)_{\mathrm{src}}^{n,n+\f{1}{2}}\,\Big], \label{eq:justHydroPredictor} \end{align} followed by the `corrector' step: \begin{align} \mathbf{X}^{n+1} &=\mathbf{X}^{n} + \Delta t\,\Big[\,-\big(\delta_{t}\mathbf{X}\big)_{\mathrm{hyp}}^{n+\f{1}{2}}+\big(\delta_{t}\mathbf{X}\big)_{\mathrm{src}}^{n+\f{1}{2},n+1}\,\Big], \label{eq:justRadiationCorrector} \\ \mathbf{U}^{n+1} &=\mathbf{U}^{n} + \Delta t\,\Big[\,-\big(\delta_{t}\mathbf{U}\big)_{\mathrm{hyd}}^{n+\f{1}{2}}+\big(\delta_{t}\mathbf{U}\big)_{\mathrm{src}}^{n+\f{1}{2},n+1}\,\Big], \label{eq:justHydroCorrector} \end{align} where double superscripts indicate that the source terms can be evaluated using radiation and hydrodynamics variables in the old and the new state. (The implicit neutrino--matter solve can be simplified considerably by time-lagging some terms. See the discussion below.) Compared with the scheme of \citet{KuTaKo16} in Eqs.~\eqref{eq:kurodaExplicitUpdate}--\eqref{eq:kurodaImplicitUpdate}, the scheme of \citet{JuObJa15} uses two explicit evaluations and two implicit evaluations per time step, instead of one of each. Also note that \citet{KuTaKo16} treat the velocity-dependent terms implicitly in time, while these terms are treated explicitly by \citet{JuObJa15}. While formally first-order accurate in time, the scheme in Eqs.~\eqref{eq:justRadiationPredictor}--\eqref{eq:justHydroCorrector} can be shown to be second-order accurate with respect to the explicit part. Except for the use of both old and new variables in the implicit part, it is equivalent to the scheme presented by \citet{McEvLo08}.
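If the implicit sources are set aside, the predictor--corrector structure of Eqs.~\eqref{eq:justRadiationPredictor}--\eqref{eq:justHydroCorrector} is just the explicit midpoint method, which is the origin of the second-order accuracy of the explicit part. A minimal Python sketch under that simplification (the right-hand side \texttt{rhs} is a hypothetical stand-in for the negated advection terms):
\begin{verbatim}
import numpy as np

def predictor_corrector_step(X, t, dt, rhs):
    """Half-step 'predictor' followed by a full-step 'corrector'
    using midpoint values (explicit midpoint method)."""
    X_half = X + 0.5 * dt * rhs(t, X)           # predictor to t + dt/2
    return X + dt * rhs(t + 0.5 * dt, X_half)   # corrector to t + dt

# Convergence check on dX/dt = -X (exact solution: exp(-t)):
rhs = lambda t, X: -X
for dt in (0.1, 0.05):
    X, t = np.array([1.0]), 0.0
    for _ in range(int(round(1.0 / dt))):
        X = predictor_corrector_step(X, t, dt, rhs)
        t += dt
    print(dt, abs(X[0] - np.exp(-1.0)))
\end{verbatim}
Halving the step size reduces the error by roughly a factor of four, confirming second-order accuracy of the explicit update.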
The prospect of evaluating some variables in the implicit neutrino--matter solve in the old state is potentially rewarding, since this part of the solve usually accounts for the majority of the computational cost in simulations. Such time lagging does, however, raise stability and accuracy concerns that warrant rigorous analysis. Methods with time lagging can be considered unconverged or partially converged implicit methods, and can be quite accurate, but this depends on the chosen time step and the degree of nonlinearity of the problem \citep[see, e.g.,][]{KnRiOl01,Lowr04}. For stability of the explicit part of the IMEX scheme in Eqs.~\eqref{eq:justRadiationPredictor}--\eqref{eq:justHydroCorrector}, an upper bound on the time step is given by the advection time scale $\tau_{\mathrm{adv}}=\Delta x/c\approx 3~\mu\mbox{s}\times(\Delta x/1~\mbox{km})$. On the other hand, the neutrino--matter interaction time scale can be estimated as $\tau_{\mathrm{int}}=\lambda_{\nu}/c\approx10~\mbox{ns}\times(\lambda_{\nu}/3\times10^{-3}~\mbox{km})$, where $\lambda_{\nu}$ is the neutrino mean-free path (cf.\ Figure~\ref{fig:tmfp} in Sect.~\ref{sec:needForKineticDescription}). In the core of a core-collapse supernova, $\lambda_{\nu}\approx3\times10^{-3}$~km, so that $\tau_{\mathrm{int}}\ll\tau_{\mathrm{adv}}$, which implies that the neutrino--matter interaction terms should be integrated with implicit methods in order to keep $\Delta t/\tau_{\mathrm{adv}}=\mathcal{O}(1)$. However, $\tau_{\mathrm{int}}$ should be viewed as the time scale for neutrino--matter equilibration, and neutrinos have practically equilibrated with the matter for densities above $10^{12}\mathrm{\ g\ cm}^{-3}$. Since, near equilibrium, the matter quantities (i.e., $\rho$, $e_{\mathrm{i}}$, and the electron density $n_{e}$) evolve on time scales that typically exceed $\tau_{\mathrm{adv}}$, it is reasonable to ask whether some neutrino opacities, which depend nonlinearly on $\rho$, $e_{\mathrm{i}}$, and $n_{e}$, can be evaluated in a lagged fashion in order to avoid costly reevaluations during an iterative implicit solve. Numerical experiments can give valuable insights into this question. To this end, \citet{JuObJa15} considered three cases for comparison: \begin{itemize} \item[(a)] The radiation moments $\mathbf{X}$ and the fluid variables $e_{\mathrm{i}}$ and $n_{e}$ appearing in the source terms $\big(\delta_{t}\mathbf{X}\big)_{\mathrm{src}}$ and $\big(\delta_{t}\mathbf{U}\big)_{\mathrm{src}}$ are defined at $t^{n+1}$. Only the Eddington and heat flux factors ($\mathfrak{k}$ and $\mathfrak{q}$) and the coefficients of the Legendre expansion of energy-coupling interactions (e.g., scattering; cf.\ Eq.~\eqref{eq:kernelExpansion}) are evaluated at $t^{n}$. \item[(b)] Like case (a), but $e_{\mathrm{i}}$ and $n_{e}$ in the source terms are evaluated at $t^{n}$ for all the opacities. This alleviates the computational cost of recomputing the opacities within the iteration procedure. Iterations are still performed in this case because the radiation moments appearing in the blocking factors are treated implicitly. \item[(c)] Like case (b), but all the energy-coupling interactions are treated explicitly in time. This renders the matrix to be inverted in the implicit solve diagonal. \end{itemize}
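As a toy illustration of such opacity lagging (the actual cases above are of course much richer), consider a single-group relaxation equation $\mathrm{d}\mathcal{J}/\mathrm{d}t=\kappa(\mathcal{J})\,(\mathcal{J}^{\mathrm{eq}}-\mathcal{J})$ discretized with backward Euler. Freezing the state-dependent opacity at the old state gives a single linear update, while the fully implicit variant must reevaluate $\kappa$ as the iterate is refined. The opacity law in this Python sketch is invented purely for illustration.
\begin{verbatim}
import numpy as np

kappa = lambda J: 1.0e3 * (1.0 + J)  # hypothetical state-dependent opacity
J_eq = 1.0                           # equilibrium energy density

def step_lagged(J, dt):
    """Backward Euler with the opacity frozen at the old state:
    one linear update, no reevaluation of kappa."""
    k = kappa(J)
    return (J + dt * k * J_eq) / (1.0 + dt * k)

def step_implicit(J, dt, n_iter=20):
    """Fully implicit: reevaluate kappa as the iterate is refined
    (simple fixed-point iteration standing in for Newton)."""
    J_new = J
    for _ in range(n_iter):
        k = kappa(J_new)
        J_new = (J + dt * k * J_eq) / (1.0 + dt * k)
    return J_new

print(step_lagged(0.0, 1.0e-2), step_implicit(0.0, 1.0e-2))
\end{verbatim}
Near equilibrium the two updates agree closely, consistent with the reasoning above; the difference grows with the time step and with the sensitivity of the opacity to the evolving state.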
Using case (b) for $\rho>10^{11}\mathrm{\ g\ cm}^{-3}$ and case (c) for $\rho\le10^{11}\mathrm{\ g\ cm}^{-3}$, \citet{JuObJa15} performed a detailed comparison of their scheme in spherical symmetry with results from \citet{LiRaJa05} (obtained with Boltzmann-based codes) for a $13\,M_{\odot}$ star, and found good agreement. They also computed a run with the same physical specifications, but with case (b) replaced by case (a) for $\rho>10^{11}\mathrm{\ g\ cm}^{-3}$, and found the results essentially unaltered (see their Fig.~11). See also \citet{JuBoJa18} for an extensive comparison of the two-moment method of \citet{JuObJa15} with the \textsc{Prometheus-Vertex} code \citep{RaJa02,BuRaJa06}, and for the impact of various approximate treatments of the relevant physics. We also note that \citet{OCon15}, who also used an explicit treatment of the matter quantities in evaluating the neutrino--matter sources, reported good agreement with \citet{LiRaJa05} across many quantities. After obtaining expressions for the radiation moments, the changes to the fluid momentum and kinetic energy densities due to neutrino--matter interactions are computed as \begin{align} \big(\delta_{t}\rho v_{j}\big)_{\mathrm{src}} &= -\sum_{\nu,\xi}\big(\delta_{t}\mathcal{H}_{j,\nu,\xi}\big)_{\mathrm{src}}, \\ \big(\delta_{t}e_{\mathrm{k}}\big)_{\mathrm{src}} &=-v^{j}\sum_{\nu,\xi}\big(\delta_{t}\mathcal{H}_{j,\nu,\xi}\big)_{\mathrm{src}}, \end{align} where the sums extend over all neutrino frequencies $\nu$ and species $\xi$, and the repeated index on the fluid velocity components $v^{j}$ implies summation over all spatial dimensions. \citet{SkDoBu19}, employing an $\mathcal{O}(v/c)$ two-moment model very similar to that of \citet{JuObJa15}, coupled to non-relativistic hydrodynamics, also use explicit and implicit methods to integrate the coupled equations in time. They only describe their time integration scheme in the context of emission, absorption, and isotropic, isoenergetic scattering. \citet{SkDoBu19} write the radiation hydrodynamics system as \begin{equation} \pd{Q}{t}+\big(\,\mathcal{F}_{Q}^{i}\,\big)_{;i} = S_{\mbox{\tiny non-stiff}} + S_{\mbox{\tiny stiff}}, \label{eq:skinnerRadHydro} \end{equation} where the evolved quantities are $Q=\big(\rho,\rho v_{j},\rho e, \rho Y_{e},\mathcal{J},\mathcal{H}_{j}\big)$; here $e$ is the total specific energy of the gas, and $\mathcal{J}$ and $\mathcal{H}_{j}$ are respectively the comoving frame spectral radiation energy density and momentum density, representing all species and groups. Components of $\mathcal{J}$ and $\mathcal{H}_{j}$ are denoted $\mathcal{J}_{sg}$ and $\mathcal{H}_{j,sg}$, where $s$ denotes neutrino species and $g$ denotes frequency group. In Eq.~\eqref{eq:skinnerRadHydro}, $\big(\,\mathcal{F}_{Q}^{i}\,\big)_{;i}$ and $S_{\mbox{\tiny non-stiff}}$ represent terms from the phase-space advection operator, while $S_{\mbox{\tiny stiff}}$ represents neutrino--matter interactions. \citet{SkDoBu19} use operator splitting to integrate the coupled system of equations. The phase-space advection terms are integrated with the optimal second-order SSP-RK scheme of \citet{ShOs88}, while the neutrino--matter interactions are treated with a subsequent backward Euler solve.
This scheme applied to Eq.~\eqref{eq:skinnerRadHydro} can be written as \begin{align} Q^{(1)} &= Q^{n} + \Delta t\,\Big\{\,-\big(\,\mathcal{F}_{Q}^{i}\,\big)_{;i}^{n} + S_{\mbox{\tiny non-stiff}}^{n}\,\Big\}, \label{eq:skinnerRK1} \\ Q^{-} &=\f{1}{2}\,Q^{n} + \f{1}{2}\,\Big[\,Q^{(1)} + \Delta t\,\Big\{\,-\big(\,\mathcal{F}_{Q}^{i}\,\big)_{;i}^{(1)} + S_{\mbox{\tiny non-stiff}}^{(1)}\,\Big\}\,\Big], \label{eq:skinnerRK2} \\ Q^{n+1} &=Q^{-} + \Delta t\,S_{\mbox{\tiny stiff}}^{n+1}, \label{eq:skinnerImplicit} \end{align} which requires two evaluations of $\big(\,\mathcal{F}_{Q}^{i}\,\big)_{;i}$ and $S_{\mbox{\tiny non-stiff}}$ and one implicit solve to evaluate $S_{\mbox{\tiny stiff}}$ per time step. After the explicit update in Eqs.~\eqref{eq:skinnerRK1}--\eqref{eq:skinnerRK2}, a nested iteration scheme is employed, where for each spatial point, the coupled system \begin{align} \f{u^{n+1}-u^{-}}{\Delta t} &=-\sum_{s}\sum_{g}\big(j_{sg}^{n+1}-\kappa_{sg}^{n+1}\mathcal{J}_{sg}^{n+1}\big), \label{eq:skinnerFluidEnergyImplicit} \\ \rho\f{\big(Y_{e}^{n+1}-Y_{e}^{-}\big)}{\Delta t} &=\sum_{s}\sum_{g}\xi_{sg}\big(j_{sg}^{n+1}-\kappa_{sg}^{n+1}\mathcal{J}_{sg}^{n+1}\big), \label{eq:skinnerElectronFractionImplicit} \\ \f{\mathcal{J}_{sg}^{n+1}-\mathcal{J}_{sg}^{-}}{\Delta t} &=j_{sg}^{n+1}-\kappa_{sg}^{n+1}\mathcal{J}_{sg}^{n+1}, \label{eq:skinnerRadiationEnergyImplicit} \end{align} is solved for the material internal energy density, $u$, and electron fraction, $Y_{e}$---or equivalently, the temperature, $T$, and $Y_{e}$---and the spectral radiation energy density, $\mathcal{J}$. In Eqs.~\eqref{eq:skinnerFluidEnergyImplicit}--\eqref{eq:skinnerRadiationEnergyImplicit}, $j_{sg}$ and $\kappa_{sg}$ are the emission and absorption coefficients (depending on $\rho$, which is fixed in this step, $T$, and $Y_{e}$), and \begin{equation} \xi_{sg} =\left\{ \begin{array}{cc} -(N_{A}\,\nu)^{-1}, & s=\nu_{e} \\ +(N_{A}\,\nu)^{-1}, & s=\bar{\nu}_{e} \\ 0, & s=\nu_{x} \end{array} \right., \end{equation} where $N_{A}$ is Avogadro's number and $\nu$ is the neutrino frequency. In the nested iteration scheme, the updates are separated into ``inner'' and ``outer'' parts. In the $k$-th outer iteration, the radiation energy density is updated implicitly in the inner iteration as \begin{equation} \f{\mathcal{J}_{sg}^{k}-\mathcal{J}_{sg}^{-}}{\Delta t} = j_{sg}^{k-1}-\kappa_{sg}^{k-1}\mathcal{J}_{sg}^{k} \quad\Rightarrow\quad \mathcal{J}_{sg}^{k} = \f{\mathcal{J}_{sg}^{-}+\Delta t\,j_{sg}^{k-1}}{1+\Delta t\,\kappa_{sg}^{k-1}}, \end{equation} where the opacities and emissivities are evaluated using $T^{k-1}$ and $Y_{e}^{k-1}$ (in the first iteration, the initial guess is $\{T^{0},Y_{e}^{0}\}=\{T^{-},Y_{e}^{-}\}$). The changes in energy and electron fraction are then computed as \begin{align} \Delta E^{k} &=\sum_{s}\sum_{g}\big(\mathcal{J}_{sg}^{k}-\mathcal{J}_{sg}^{-}\big), \\ \Delta Y_{e}^{k} &=\sum_{s}\sum_{g}\xi_{sg}\big(\mathcal{J}_{sg}^{k}-\mathcal{J}_{sg}^{-}\big), \end{align} and the residuals as \begin{align} r_{E}^{k} &= u^{k}-u^{-} + \Delta E^{k}, \\ r_{Y_{e}}^{k} &= \rho\,\big(Y_{e}^{k}-Y_{e}^{-}\big) - \Delta Y_{e}^{k}, \end{align} where $u^{k}=u(T^{k},Y_{e}^{k})$ (the internal energy also depends on $\rho$, which is fixed in this part of the solve). Then, using a Newton--Raphson technique, the temperature $T^{k}$ and electron fraction $Y_{e}^{k}$ are found such that $r_{E}^{k}=r_{Y_{e}}^{k}=0$. The iteration scheme is terminated when the relative changes in temperature and electron fraction, $\delta T^{k}=|T^{k}-T^{k-1}|/T^{k-1}$ and $\delta Y_{e}^{k}=|Y_{e}^{k}-Y_{e}^{k-1}|/Y_{e}^{k-1}$, are below a specified tolerance (e.g., $10^{-6}$). In this sense, the converged solutions satisfy Eqs.~\eqref{eq:skinnerFluidEnergyImplicit}--\eqref{eq:skinnerRadiationEnergyImplicit}.
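The nested structure can be condensed into a short Python sketch, here with one Newton update of $(T,Y_{e})$ per outer pass; the equation-of-state and opacity callables (\texttt{u\_of}, \texttt{j\_of}, \texttt{kappa\_of}) and the weight array \texttt{xi} are hypothetical stand-ins for the quantities defined above.
\begin{verbatim}
import numpy as np

def nested_solve(T, Ye, u_minus, Ye_minus, J_minus, rho, dt,
                 u_of, j_of, kappa_of, xi, tol=1.0e-6, max_outer=50):
    """Outer iteration on (T, Ye); inner closed-form update of J."""
    for _ in range(max_outer):
        T_prev, Ye_prev = T, Ye
        # Inner step: per-group backward Euler, solved in closed form.
        J = (J_minus + dt * j_of(T, Ye)) / (1.0 + dt * kappa_of(T, Ye))
        dE = np.sum(J - J_minus)         # energy exchanged
        dY = np.sum(xi * (J - J_minus))  # lepton number exchanged
        # One Newton update of (T, Ye) on the residuals r_E, r_Ye,
        # with dE and dY held fixed during the update.
        r = np.array([u_of(T, Ye) - u_minus + dE,
                      rho * (Ye - Ye_minus) - dY])
        eT, eY = 1.0e-6 * max(1.0, abs(T)), 1.0e-8
        jacobian = np.array(
            [[(u_of(T + eT, Ye) - u_of(T, Ye)) / eT,
              (u_of(T, Ye + eY) - u_of(T, Ye)) / eY],
             [0.0, rho]])                # r_Ye depends only on Ye
        dT, dYe = np.linalg.solve(jacobian, -r)
        T, Ye = T + dT, Ye + dYe
        if (abs(T - T_prev) < tol * abs(T_prev)
                and abs(Ye - Ye_prev) < tol * abs(Ye_prev)):
            return T, Ye, J
    raise RuntimeError("nested iteration did not converge")
\end{verbatim}
The design point is that the expensive nonlinear iteration acts only on the two local unknowns $(T,Y_{e})$, while each radiation group is updated by a scalar division.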
\citet{SkDoBu19} report that in practice their iteration procedure converges in a few iterations for a wide range of conditions. An obvious benefit of this nested approach is that the nonlinear iterations are performed on a smaller system with only two unknowns ($T$ and $Y_{e}$). Note, however, that modifications to this algorithm are needed if energy-coupling interactions such as scattering and pair processes are to be included in an implicit fashion, as in cases (a) and (b) from \citet{JuObJa15} discussed above. After obtaining $u^{n+1}$, $Y_{e}^{n+1}$, $T^{n+1}$, and $\mathcal{J}_{sg}^{n+1}$ by solving Eqs.~\eqref{eq:skinnerFluidEnergyImplicit}--\eqref{eq:skinnerRadiationEnergyImplicit}, the radiation momentum density is updated implicitly as \begin{align} &\f{\mathcal{H}_{j,sg}^{n+1}-\mathcal{H}_{j,sg}^{-}}{\Delta t} = -\big(\kappa_{sg}^{n+1}+\sigma_{sg}^{n+1}\big)\,\mathcal{H}_{j,sg}^{n+1} \nonumber \\ &\Rightarrow \mathcal{H}_{j,sg}^{n+1} = \f{\mathcal{H}_{j,sg}^{-}}{1+\Delta t\,\big(\,\kappa_{sg}^{n+1}+\sigma_{sg}^{n+1}\,\big)}, \end{align} where $\sigma_{sg}$ is the scattering coefficient. Finally, the fluid momentum and kinetic energy densities ($\rho v_{j}$ and $\rho e_{\mathrm{k}}$, respectively) are updated as \begin{align} (\rho v_{j})^{n+1} &=(\rho v_{j})^{-} - \sum_{s}\sum_{g}\big(\mathcal{H}_{j,sg}^{n+1}-\mathcal{H}_{j,sg}^{-}\big), \label{eq:skinnerFluidMomentumUpdate} \\ (\rho e_{\mathrm{k}})^{n+1} &=(\rho e_{\mathrm{k}})^{-} - \sum_{s}\sum_{g}(v^{j})^{-}\big(\mathcal{H}_{j,sg}^{n+1}-\mathcal{H}_{j,sg}^{-}\big), \label{eq:skinnerFluidKineticEnergyUpdate} \end{align} where, in the last equation, the repeated index $j$ implies summation over spatial dimensions. The total energy density of the gas at $t^{n+1}$ is then obtained from \begin{align} (\rho e)^{n+1} &=u^{n+1} + (\rho e_{\mathrm{k}})^{n+1} \label{eq:skinnerTotalFluidEnergyUpdate} \\ &=\underbrace{\big(u^{-}+(\rho e_{\mathrm{k}})^{-}\big)}_{(\rho e)^{-}} -\sum_{s}\sum_{g}\big(\mathcal{J}_{sg}^{n+1}-\mathcal{J}_{sg}^{-}\big) - \sum_{s}\sum_{g}(v^{j})^{-}\big(\mathcal{H}_{j,sg}^{n+1}-\mathcal{H}_{j,sg}^{-}\big), \nonumber \end{align} where Eq.~\eqref{eq:skinnerFluidEnergyImplicit}, with Eq.~\eqref{eq:skinnerRadiationEnergyImplicit} inserted, and Eq.~\eqref{eq:skinnerFluidKineticEnergyUpdate} are used. Note that Eq.~\eqref{eq:skinnerTotalFluidEnergyUpdate} differs from the total energy update listed in \citet{SkDoBu19}; see their Eq.~(32), which is equivalent to Eq.~\eqref{eq:skinnerFluidKineticEnergyUpdate}, but with $\rho e_{\mathrm{k}}\to \rho e$. We believe Eq.~\eqref{eq:skinnerTotalFluidEnergyUpdate} is correct in this context since it accounts for changes in internal \emph{and} kinetic energy due to neutrino--matter interactions. \subsubsection{Lepton number and energy conservation} \label{sec:lepenergycons} We end this section on discretization techniques for two-moment models with a discussion of lepton number and energy conservation. These conservation laws are inherent in the system of equations evolved, and they provide a crucial consistency check on the numerical solution.
The challenges discussed here in the context of the two-moment model mirror those discussed in Sect.~\ref{sec:relativisticEffectsAndConservationOfEnergy} for Boltzmann transport. The concept of lepton number conservation is easily understood by considering Eqs.~\eqref{eq:ElectronNumberConservation3p1} and \eqref{eq:numberEquation_3p1}, which are evolution equations for the electron density and neutrino number density, respectively. The Eulerian electron number density is given by $N_{e}=D\,Y_{e}/m_{\mbox{\tiny B}}=W\,n_{e}$, and the Eulerian neutrino lepton number density and lepton number flux density are \begin{equation} N_{\nu} = \sum_{s=1}^{N_{\mbox{\tiny{\sc Sp}}}}\mathsf{g}_{s}\,N_{s} \quad\text{and}\quad G_{\nu}^{i} = \sum_{s=1}^{N_{\mbox{\tiny{\sc Sp}}}}\mathsf{g}_{s}\,G_{s}^{i}, \end{equation} respectively. Then, combining Eq.~\eqref{eq:ElectronNumberConservation3p1}, using the source term in Eq.~\eqref{eq:electronfractionequationsourceterm}, with Eq.~\eqref{eq:numberEquation_3p1} results in the conservation law for the total lepton number $N_{\mbox{\tiny Lep}}=N_{e}+N_{\nu}$: \begin{equation} \f{1}{\alpha\sqrt{\gamma}} \big[\, \pd{}{t}\big(\,\sqrt{\gamma}\,N_{\mbox{\tiny Lep}}\,\big) +\pd{}{i}\big(\,\sqrt{\gamma}\,\big[\,\alpha\,G_{\mbox{\tiny Lep}}^{i}-\beta^{i}\,N_{\mbox{\tiny Lep}}\,\big]\,\big) \,\big] =0, \label{eq:leptonNumberConservation_3p1} \end{equation} where $G_{\mbox{\tiny Lep}}^{i} = N_{e}\,v^{i}+G_{\nu}^{i}$. A similar conservation statement for the total energy is not available in the relativistic case because the matter and neutrino equations governing the evolution of the four-momentum---Eqs.~\eqref{eq:fluidEnergyEquation3p1}, \eqref{eq:fluidMomentumEquation3p1}, \eqref{eq:EulerianEnergyEquation_3p1}, and \eqref{eq:EulerianMomentumEquation_3p1}---are not local conservation laws. Instead, the so-called ADM mass, $M_{\mbox{\tiny ADM}}$ \citep{BaSh10}, defined as a global quantity, is conserved. (See, e.g., \citet{KuTaKo16}, their Eq.~(71), for a definition applicable to the CCSN context.) In this case, conservation of the ADM mass can be monitored as a consistency check. \citet{KuTaKo16} (see their Figure~7) report violations of ADM mass conservation, $\Delta M_{\mbox{\tiny ADM}}$ (i.e., deviations from the initial value), of order $\Delta M_{\mbox{\tiny ADM}}\approx8\times10^{50}$~erg early after core bounce. \citet{MuJaDi10} (see their Figure~12) report violations of ADM mass conservation of similar magnitude in a simulation extending beyond $500$~ms after core bounce. In their simulation, $\Delta M_{\mbox{\tiny ADM}}$ jumps by about $5\times10^{50}$~erg at bounce, and keeps increasing, more gradually, to $\Delta M_{\mbox{\tiny ADM}}\approx2\times10^{51}$~erg at the end of the simulation. This change in the ADM mass is only about $0.5$\% of the initial value. \citet{MuJaDi10} argue that the velocity-dependent terms in the transport equations are the terms most critically responsible for the violation of energy (or ADM mass) conservation. To see this, it is instructive to consider the equations they solve in the special relativistic limit, with Cartesian coordinates and no neutrino--matter interactions. Neutrino--matter interactions are entirely local, and lepton number and four-momentum conservation in this sector can be enforced by constraints as in Eqs.~\eqref{eq:momentumConservationConstraint}--\eqref{eq:leptonNumberConservationConstraint}.
The challenge stems from the discretization of the phase-space advection operators; i.e., the left-hand side of the moment equations. In the special relativistic limit with Cartesian coordinates and no neutrino--matter interactions, the Lagrangian two-moment model corresponding to the one used by \citet{MuJaDi10} is given by the energy equation (cf.\ Eq.~\eqref{eq:spectralLagrangianEnergyEquation_3p1}) \begin{equation} \pd{}{\nu}\big(\,\hat{\mathcal{J}}u^{\nu}+\hat{\mathcal{H}}^{\nu}\,\big) -\pd{}{\varepsilon}\big(\,\varepsilon\,\hat{\mathcal{T}}^{\mu\nu}\,\pd{u_{\mu}}{\nu}\,\big) =-\hat{\mathcal{T}}^{\mu\nu}\,\pd{u_{\mu}}{\nu} \label{eq:spectralLagrangianEnergyEquation_SR} \end{equation} and the momentum equation (cf.\ Eq.~\eqref{eq:spectralLagrangianMomentumEquation_3p1}) \begin{equation} \pd{}{\nu}\big(\,\hat{\mathcal{H}}_{j}\,u^{\nu}+\hat{\mathcal{K}}_{j}^{\hspace{2pt}\nu}\,\big) -\pd{}{\varepsilon}\big(\,h_{j\rho}\,\hat{\mathcal{Q}}^{\rho\mu\nu}\,\pd{u_{\mu}}{\nu}\,\big) =\hat{\mathcal{T}}^{\mu\nu}\,\pd{h_{j\mu}}{\nu}, \label{eq:spectralLagrangianMomentumEquation_SR} \end{equation} where the ``hat'' is used to denote that a factor $\varepsilon^{2}$ has been absorbed into the definition of the moments; i.e., \begin{equation} \big\{\,\hat{\mathcal{J}},\hat{\mathcal{H}}^{\nu},\hat{\mathcal{K}}^{\mu\nu},\ldots\,\big\} =\varepsilon^{2}\,\big\{\,\mathcal{J},\mathcal{H}^{\nu},\mathcal{K}^{\mu\nu},\ldots\,\big\}. \end{equation} Note that neither Eq.~\eqref{eq:spectralLagrangianEnergyEquation_SR} nor Eq.~\eqref{eq:spectralLagrangianMomentumEquation_SR} is a local conservation law. Therefore, a numerical method based on these equations requires care in the discretization process to achieve neutrino number, energy, and momentum conservation. (Neutrino energy \emph{and} momentum contribute to the ADM mass.) First, note that dividing Eq.~\eqref{eq:spectralLagrangianEnergyEquation_SR} by $\varepsilon$ results in \begin{equation} \pd{\hat{\mathcal{N}}^{\nu}}{\nu} - \pd{}{\varepsilon}\big(\,\hat{\mathcal{T}}^{\mu\nu}\,\pd{u_{\mu}}{\nu}\,\big) = 0, \label{eq:spectralNumberEquation_SR} \end{equation} which is a local phase-space conservation law for the spectral number density. In arriving at Eq.~\eqref{eq:spectralNumberEquation_SR}, the remainder after bringing $\varepsilon^{-1}$ inside the energy derivative in Eq.~\eqref{eq:spectralLagrangianEnergyEquation_SR} cancels with the right-hand side. This is exactly what the discretization of the energy derivative term developed by \citet{MuJaDi10} (discussed in Sect.~\ref{sec:EnergyDiscretization}) is designed to do in order to achieve lepton number conservation. On the other hand, \begin{equation} -n_{\mu}\hat{\mathcal{T}}^{\mu\nu} = \big(\,\hat{\mathcal{E}}n^{\nu}+\hat{\mathcal{F}}^{\nu}\,\big) = W\,\big(\,\hat{\mathcal{J}}u^{\nu}+\hat{\mathcal{H}}^{\nu}\,\big) + v^{j}\,\big(\,\hat{\mathcal{H}}_{j}u^{\nu}+\hat{\mathcal{K}}_{j}^{\hspace{2pt}\nu}\,\big), \end{equation} where both the Eulerian and Lagrangian decompositions of $\hat{\mathcal{T}}^{\mu\nu}$ are used; cf.\ Eqs.~\eqref{eq:stressEnergyEulerianDecomposition} and \eqref{eq:stressEnergyLagrangianDecomposition}, respectively.
Thus, adding $W$ times Eq.~\eqref{eq:spectralLagrangianEnergyEquation_SR} to the contraction of $v^{j}$ with Eq.~\eqref{eq:spectralLagrangianMomentumEquation_SR} gives \begin{equation} \pd{}{\nu}\big(\,\hat{\mathcal{E}}n^{\nu}+\hat{\mathcal{F}}^{\nu}\,\big) -\pd{}{\varepsilon}\big(\,(-n_{\rho})\,\hat{\mathcal{Q}}^{\rho\mu\nu}\,\pd{u_{\mu}}{\nu}\,\big) = 0, \label{eq:spectralEnergyEquation_SR} \end{equation} which is a local phase-space conservation law for the spectral energy density. When arriving at Eq.~\eqref{eq:spectralEnergyEquation_SR}, the remainders after bringing $W$ inside the spacetime derivative in Eq.~\eqref{eq:spectralLagrangianEnergyEquation_SR} and $v^{j}$ inside the spacetime derivative of Eq.~\eqref{eq:spectralLagrangianMomentumEquation_SR} cancel with the terms due to the sources on the right-hand sides of Eqs.~\eqref{eq:spectralLagrangianEnergyEquation_SR} and \eqref{eq:spectralLagrangianMomentumEquation_SR} in a nontrivial way: \begin{align} &\big(\,\hat{\mathcal{J}}u^{\nu}+\hat{\mathcal{H}}^{\nu}\,\big)\,\pd{W}{\nu} +\big(\,\hat{\mathcal{H}}_{j}\,u^{\nu}+\hat{\mathcal{K}}_{j}^{\hspace{2pt}\nu}\,\big)\,\pd{v^{j}}{\nu} -\hat{\mathcal{T}}^{\mu\nu}\,\big(\,W\pd{u_{\mu}}{\nu}-v^{j}\pd{h_{j\mu}}{\nu}\,\big) \nonumber \\ &=-\big(\,u_{\mu}\,\pd{W}{\nu}-h_{j\mu}\,\pd{v^{j}}{\nu}+W\pd{u_{\mu}}{\nu}-v^{j}\pd{h_{j\mu}}{\nu}\,\big)\,\hat{\mathcal{T}}^{\mu\nu} \nonumber \\ &=-\pd{}{\nu}\big(\,W\,u_{\mu}-h_{j\mu}\,v^{j}\,\big)\,\hat{\mathcal{T}}^{\mu\nu} = - \hat{\mathcal{T}}^{\mu\nu}\,\pd{n_{\mu}}{\nu} = 0, \label{eq:spectralEnergyConservationConstraint} \end{align} since, in special relativity, $n_{\mu}=(-1,0,0,0)$. Similarly, \begin{equation} \gamma_{j\mu}\,\hat{\mathcal{T}}^{\mu\nu} = \big(\,\hat{\mathcal{F}}_{j}\,n^{\nu}+\hat{\mathcal{S}}_{j}^{\hspace{2pt}\nu}\,\big) = Wv_{j}\,\big(\,\hat{\mathcal{J}}u^{\nu}+\hat{\mathcal{H}}^{\nu}\,\big) + \big(\,\hat{\mathcal{H}}_{j}u^{\nu}+\hat{\mathcal{K}}_{j}^{\hspace{2pt}\nu}\,\big). \end{equation} Then, adding $Wv_{j}$ times Eq.~\eqref{eq:spectralLagrangianEnergyEquation_SR} to Eq.~\eqref{eq:spectralLagrangianMomentumEquation_SR}, one obtains \begin{equation} \pd{}{\nu}\big(\,\hat{\mathcal{F}}_{j}\,n^{\nu}+\hat{\mathcal{S}}_{j}^{\hspace{2pt}\nu}\,\big) -\pd{}{\varepsilon}\big(\,\gamma_{j\rho}\,\hat{\mathcal{Q}}^{\rho\mu\nu}\,\pd{u_{\mu}}{\nu}\,\big) = 0, \label{eq:spectralMomentumEquation_SR} \end{equation} which is a local conservation law for the spectral momentum density. Again, in arriving at Eq.~\eqref{eq:spectralMomentumEquation_SR}, the remainder after bringing $Wv_{j}$ inside the spacetime derivative in Eq.~\eqref{eq:spectralLagrangianEnergyEquation_SR} cancels with the sources in Eqs.~\eqref{eq:spectralLagrangianEnergyEquation_SR} and \eqref{eq:spectralLagrangianMomentumEquation_SR} in a nontrivial way: \begin{align} &\big(\,\hat{\mathcal{J}}u^{\nu}+\hat{\mathcal{H}}^{\nu}\,\big)\,\pd{}{\nu}\big(Wv_{j}\big) -Wv_{j}\,\hat{\mathcal{T}}^{\mu\nu}\,\pd{u_{\mu}}{\nu} + \hat{\mathcal{T}}^{\mu\nu}\,\pd{h_{j\mu}}{\nu} \nonumber \\ &=-\big(\,u_{\mu}\,\pd{}{\nu}\big(Wv_{j}\big) + Wv_{j}\,\pd{u_{\mu}}{\nu} - \pd{h_{j\mu}}{\nu}\,\big)\,\hat{\mathcal{T}}^{\mu\nu} \nonumber \\ &=-\pd{}{\nu}\big(\,Wv_{j}u_{\mu}-h_{j\mu}\,\big)\,\hat{\mathcal{T}}^{\mu\nu} =\hat{\mathcal{T}}^{\mu\nu}\,\pd{g_{j\mu}}{\nu}=0, \label{eq:spectralMomentumConservationConstraint} \end{align} since, in special relativity and with Cartesian coordinates, $\pd{g_{j\mu}}{\nu}=0$.
Equations~\eqref{eq:spectralEnergyConservationConstraint} and \eqref{eq:spectralMomentumConservationConstraint} can be viewed as constraints. Since the discretizations of Eqs.~\eqref{eq:spectralLagrangianEnergyEquation_SR} and \eqref{eq:spectralLagrangianMomentumEquation_SR} are unlikely to satisfy these constraints, they are inconsistent with energy conservation in the sense of Eq.~\eqref{eq:spectralEnergyEquation_SR} and momentum conservation in the sense of Eq.~\eqref{eq:spectralMomentumEquation_SR}. In the fully relativistic case, one is faced with the same issue, namely that the discretization of the Lagrangian two-moment model (Eqs.~\eqref{eq:spectralLagrangianEnergyEquation_3p1} and \eqref{eq:spectralLagrangianMomentumEquation_3p1}) is to a certain degree inconsistent with the discretization of the Eulerian two-moment model (Eqs.~\eqref{eq:spectralEulerianEnergyEquation_3p1} and \eqref{eq:EulerianMomentumEquation_3p1}). Since it is the Eulerian moments that enter into the definition of the ADM mass, this inconsistency can propagate and manifest itself as violations of ADM mass conservation. On the other hand, by using the Eulerian two-moment model as the starting point for a numerical method---e.g., as in \citet{KuTaKo16}---it may be easier to control $\Delta M_{\mbox{\tiny ADM}}$. (Indeed, the time evolutions of the ADM mass reported by \citet{KuTaKo16} and \citet{MuJaDi10} are quite different.) However, while the use of the Eulerian two-moment model may provide an advantage with regard to controlling energy conservation, one is still left with the equally challenging task of maintaining consistency with the number equation (Eq.~\eqref{eq:spectralNumberEquation_3p1}) and controlling lepton number conservation, as discussed in detail by \citet{CaEnMe13a}. In this case, violations of lepton number conservation in the sense of Eq.~\eqref{eq:leptonNumberConservation_3p1} may still result. We conclude this section by discussing number, energy, and momentum conservation in the context of the $\mathcal{O}(v/c)$ limit of the relativistic Lagrangian two-moment model discussed above, implemented by \citet{JuObJa15} and \citet{SkDoBu19}. (Note that we use units in which $c=1$.) In this limit, the energy equation, Eq.~\eqref{eq:spectralLagrangianEnergyEquation_SR}, becomes \begin{equation} \pd{}{t}\big(\,\hat{\mathcal{J}}+\Theta\,v^{i}\hat{\mathcal{H}}_{i}\,\big) + \pd{}{i}\big(\,\hat{\mathcal{H}}^{i}+v^{i}\hat{\mathcal{J}}\,\big) - \pd{}{\varepsilon}\big(\,\varepsilon\,\hat{\mathcal{K}}^{i}_{\hspace{4pt}k}\,\pd{v^{k}}{i}\,\big) = - \hat{\mathcal{K}}^{i}_{\hspace{4pt}k}\,\pd{v^{k}}{i}, \label{eq:spectralLagrangianEnergyEquation_VoverC} \end{equation} while the momentum equation, Eq.~\eqref{eq:spectralLagrangianMomentumEquation_SR}, becomes \begin{equation} \pd{}{t}\big(\,\hat{\mathcal{H}}_{j}+\Theta\,v^{i}\hat{\mathcal{K}}_{ij}\,\big) + \pd{}{i}\big(\,\hat{\mathcal{K}}^{i}_{\hspace{4pt}j}+v^{i}\hat{\mathcal{H}}_{j}\,\big) - \pd{}{\varepsilon}\big(\,\varepsilon\,\hat{\mathcal{L}}^{i}_{\hspace{4pt}kj}\,\pd{v^{k}}{i}\,\big) = - \hat{\mathcal{H}}^{i}\,\pd{v_{j}}{i}. \label{eq:spectralLagrangianMomentumEquation_VoverC} \end{equation} For simplicity, we ignore terms proportional to the time derivative of the fluid three-velocity, which is a reasonable approximation. In Eqs.~\eqref{eq:spectralLagrangianEnergyEquation_VoverC} and \eqref{eq:spectralLagrangianMomentumEquation_VoverC}, we introduced a constant parameter, $\Theta$, that is either zero or one.
For $\Theta=0$, the two-moment model reduces to the one solved by \citet{JuObJa15} and by \citet{SkDoBu19}. However, when $\Theta=1$, as we will show below, the two-moment model is better aligned with number, energy, and momentum conservation. First, dividing Eq.~\eqref{eq:spectralLagrangianEnergyEquation_VoverC} by the particle energy $\varepsilon$ and rearranging, one obtains \begin{equation} \pd{}{t}\big(\,\hat{\mathcal{D}}+\Theta\,v^{i}\hat{\mathcal{I}}_{i}\,\big) + \pd{}{i}\big(\,\hat{\mathcal{I}}^{i}+v^{i}\hat{\mathcal{D}}\,\big) - \pd{}{\varepsilon}\big(\,\hat{\mathcal{K}}^{i}_{\hspace{4pt}k}\,\pd{v^{k}}{i}\,\big) = 0, \label{eq:spectralNumberEquation_VoverC} \end{equation} which is a local conservation law for the spectral number density $\hat{\mathcal{D}}+\Theta\,v^{i}\hat{\mathcal{I}}_{i}$. Note that, when $\Theta=0$, it is the Lagrangian number density defined in Eq.~\eqref{eq:numberMomentsLagrangian} that is conserved, which is incorrect in the $\mathcal{O}(v/c)$ limit. On the other hand, when $\Theta=1$, Eq.~\eqref{eq:spectralNumberEquation_VoverC} is a conservation law for the $\mathcal{O}(v/c)$ approximation of the Eulerian number density defined in Eq.~\eqref{eq:eulerianNumberInTermsOfLagrangianMoments}, which is the physically conserved quantity. Next, we consider energy and momentum conservation. Following the approach in the relativistic case, adding Eq.~\eqref{eq:spectralLagrangianEnergyEquation_VoverC} to the contraction of $v^{j}$ with Eq.~\eqref{eq:spectralLagrangianMomentumEquation_VoverC}, one obtains \begin{align} &\pd{}{t}\big(\hat{\mathcal{J}}+(1+\Theta)\,v^{i}\hat{\mathcal{H}}_{i}\big) +\pd{}{i}\big(\hat{\mathcal{H}}^{i}+v^{i}\hat{\mathcal{J}}+v^{j}\hat{\mathcal{K}}^{i}_{\hspace{4pt}j}\big) \nonumber \\ &\hspace{12pt} - \pd{}{\varepsilon}\big(\,\varepsilon\,\hat{\mathcal{K}}^{i}_{\hspace{4pt}k}\,\pd{v^{k}}{i}\,\big) =\mathcal{O}(v^{2}), \label{eq:spectralEulerianEnergyEquation_VoverC} \end{align} which, to $\mathcal{O}(v/c)$, is a local conservation law for the Eulerian spectral energy density $\hat{\mathcal{J}}+(1+\Theta)\,v^{i}\hat{\mathcal{H}}_{i}$. With $\Theta=1$, this is the correct $\mathcal{O}(v/c)$ limit of the Eulerian energy density in Eq.~\eqref{eq:eulerianEnergyInTermsOfLagrangianMoments}. Terms of higher order in the fluid velocity have been moved to the right-hand side of Eq.~\eqref{eq:spectralEulerianEnergyEquation_VoverC}; these must remain small for the $\mathcal{O}(v/c)$ limit to be valid. Also note that, with $\Theta=0$, energy conservation breaks down to leading order in the fluid three-velocity (a factor of 2 should appear in the coefficient of the second term inside the parentheses of the time derivative). Similarly, adding $v_{j}$ times Eq.~\eqref{eq:spectralLagrangianEnergyEquation_VoverC} to Eq.~\eqref{eq:spectralLagrangianMomentumEquation_VoverC}, one obtains \begin{align} &\pd{}{t}\big(\hat{\mathcal{H}}_{j}+v_{j}\hat{\mathcal{J}}+\Theta\,v^{i}\hat{\mathcal{K}}_{ij}\big) + \pd{}{i}\big(\,\hat{\mathcal{K}}^{i}_{\hspace{4pt}j}+\hat{\mathcal{H}}^{i}v_{j}+v^{i}\hat{\mathcal{H}}_{j}\,\big) \nonumber \\ &\hspace{12pt} - \pd{}{\varepsilon}\big(\varepsilon\,\hat{\mathcal{L}}^{i}_{\hspace{4pt}kj}\,\pd{v^{k}}{i}\big) =\mathcal{O}(v^{2}), \label{eq:spectralEulerianMomentumEquation_VoverC} \end{align} which, to $\mathcal{O}(v/c)$, is a local conservation law for the Eulerian spectral momentum density $\hat{\mathcal{H}}_{j}+v_{j}\hat{\mathcal{J}}+\Theta\,v^{i}\hat{\mathcal{K}}_{ij}$.
Again, with $\Theta=1$, this is the correct $\mathcal{O}(v/c)$ limit of the Eulerian momentum density, Eq.~\eqref{eq:eulerianMomentumInTermsOfLagrangianMoments}. Thus, at the expense of some additional computational complexity, by letting $\Theta=1$ in Eqs.~\eqref{eq:spectralLagrangianEnergyEquation_VoverC} and \eqref{eq:spectralLagrangianMomentumEquation_VoverC}, the two-moment model becomes consistent with number, energy, and momentum conservation in the $\mathcal{O}(v/c)$ limit. \subsection{One-moment kinetics} \subsubsection{Newtonian-gravity, $\mathcal{O}(v/c)$, finite-difference implementation} One-moment kinetics is typically deployed in the context of neutrino transport in core-collapse supernovae using the multigroup (i.e., multi-frequency) flux-limited diffusion approximation (MGFLD). Such MGFLD approaches solve the neutrino and antineutrino moment equations for the zeroth moment of the distribution function, the multigroup neutrino/antineutrino energy density, with closure provided at the level of the first moment, the neutrino/antineutrino energy flux, via a diffusion-like equation, modified in such a way that the flux cannot become superluminal (flux limiting). \citet{SwMy09} were the first to implement such an approach in axisymmetric simulations of core-collapse supernovae. The equations for the neutrino/antineutrino multigroup energy densities used by Swesty and Myra are expressed as \begin{equation}\label{eq:bte0} \frac{\partial E_{\epsilon}}{\partial t} + {\nabla} \cdot \left( E_{\epsilon} {\bf v} \right) + {\nabla} \cdot {\bf F}_{\epsilon} - \epsilon \frac{\partial}{\partial \epsilon} \left( {\mathsf P}_{\epsilon}: {\nabla} {\bf v} \right) = {\mathbb S}_{\epsilon}, \end{equation} \begin{equation}\label{eq:bte0bar} \frac{\partial \bar{E}_{\epsilon}}{\partial t} + {\nabla} \cdot \left( \bar{E}_{\epsilon} {\bf v} \right) + {\nabla} \cdot \bar{{\bf F}}_{\epsilon} - \epsilon \frac{\partial}{\partial \epsilon} \left( \bar{{\mathsf P}}_{\epsilon}: {\nabla} {\bf v} \right) = \bar{{\mathbb S}}_{\epsilon}, \end{equation} where $E_\epsilon$ and $\bar{E}_\epsilon$ are the neutrino and antineutrino energy densities per group, ${\mathsf P}_\epsilon$ and $\bar{{\mathsf P}}_\epsilon$ are the neutrino and antineutrino stress tensors, and ${\mathbb S}_\epsilon$ and $\bar{{\mathbb S}}_\epsilon$ are the neutrino and antineutrino matter couplings, respectively. The energy flux in both equations is given by a Fick's-law-like relation of the form \begin{equation}\label{eq:fick} {\bf F}_\epsilon \equiv -D_\epsilon {\nabla} E_\epsilon, \end{equation} where \begin{equation} \label{eq:diff_simple} D_\epsilon = \frac{c}{3\kappa^T_\epsilon} \end{equation} is the diffusion coefficient, and $\kappa_\epsilon^T$ is the total opacity. In flux-limited diffusion schemes, the diffusion coefficient $D_\epsilon$ is modified. A general form for such a modified diffusion coefficient is given by \begin{equation} \label{eq:lpd} D_\epsilon \equiv \frac{c \lambda_\epsilon({R_\epsilon})} {\kappa^T_\epsilon}. \end{equation} In particular, the so-called Levermore--Pomraning flux limiter \citep{LePo81} is given by \begin{equation} \label{eq:lpfl} \lambda_\epsilon({R_\epsilon}) \equiv \frac{2 + {R_\epsilon}} {6 + 3{R_\epsilon} + {R_\epsilon}^2}, \end{equation} where $R_\epsilon$ is the radiation Knudsen number: the ratio of the mean free path to a characteristic length scale of the problem. The Knudsen number is written as \begin{equation} {R_\epsilon} \equiv \frac{\left|{\nabla} E_\epsilon \right|}{\kappa^T_\epsilon E_\epsilon}.
\label{eq:knudsen} \end{equation} Note that the Knudsen number differs between energy groups, given that the opacities are typically (and for neutrinos in core-collapse supernovae, definitely) energy dependent. The radiation stress tensor takes the typical form \begin{equation} \label{eq:edddef} {\mathsf{P}}_\epsilon \equiv {\mathsf X}_\epsilon E_\epsilon, \end{equation} where \begin{equation} \label{eq:chidef} {\mathsf X}_\epsilon \equiv \frac{1}{2} \left( 1-\chi_\epsilon \right) {\mathsf{I}} + \frac{1}{2} \left( 3 \chi_\epsilon - 1 \right) {\bf n}{\bf n}, \end{equation} and $\chi_\epsilon$ is the scalar Eddington factor, which in the case of the Levermore--Pomraning flux-limiting scheme becomes \begin{equation} \label{eq:chismdef} \chi_\epsilon = \lambda_\epsilon ({R_\epsilon}) + \left\{\lambda_\epsilon({R_\epsilon})\right\}^2 \: {R_\epsilon}^2. \end{equation}
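These closure relations are purely algebraic and can be transcribed directly; the following Python functions implement Eqs.~\eqref{eq:lpd}, \eqref{eq:lpfl}, \eqref{eq:knudsen}, and \eqref{eq:chismdef}, and check the diffusive and free-streaming limits.
\begin{verbatim}
import numpy as np

def knudsen(grad_E, kappa_T, E):
    """Radiation Knudsen number R_eps = |grad E| / (kappa^T E)."""
    return np.abs(grad_E) / (kappa_T * E)

def lp_limiter(R):
    """Levermore-Pomraning flux limiter lambda(R)."""
    return (2.0 + R) / (6.0 + 3.0 * R + R**2)

def diffusion_coefficient(R, kappa_T, c=1.0):
    """Flux-limited diffusion coefficient D = c lambda(R) / kappa^T."""
    return c * lp_limiter(R) / kappa_T

def eddington_factor(R):
    """Scalar Eddington factor chi = lambda + lambda^2 R^2."""
    lam = lp_limiter(R)
    return lam + lam**2 * R**2

# R -> 0 (diffusion): lambda -> 1/3, chi -> 1/3.
# R -> infinity (free streaming): lambda -> 1/R, chi -> 1.
for R in (1.0e-6, 1.0e6):
    print(lp_limiter(R), eddington_factor(R))
\end{verbatim}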
Given the choice of Levermore--Pomraning flux limiting, the evolution equations \eqref{eq:bte0} and \eqref{eq:bte0bar} become \begin{equation}\label{eq:bte0f} \frac{\partial E_{\epsilon}}{\partial t} + {\nabla} \cdot \left( E_{\epsilon} {\bf v} \right) - {\nabla} \cdot (D_\epsilon {\nabla} E_{\epsilon}) - \epsilon \frac{\partial}{\partial \epsilon} \left\{ ({\mathsf X}_{\epsilon} E_\epsilon): {\nabla} {\bf v} \right\} = {\mathbb S}_{\epsilon}, \end{equation} \begin{equation}\label{eq:bte0barf} \frac{\partial \bar{E}_{\epsilon}}{\partial t} + {\nabla} \cdot \left( \bar{E}_{\epsilon} {\bf v} \right) - {\nabla} \cdot (\bar{D}_\epsilon {\nabla} \bar{E}_{\epsilon}) - \epsilon \frac{\partial}{\partial \epsilon} \left\{ (\bar{{\mathsf X}}_{\epsilon} \bar{E}_\epsilon): {\nabla} {\bf v} \right\} = \bar{{\mathbb S}}_{\epsilon}. \end{equation} As Swesty and Myra note, these equations are not in conservative form. They opt to monitor conservation of lepton number and energy after the fact; the degree to which they achieve either was not documented. Their equations are operator split as follows (written here for just the neutrinos, not the antineutrinos): \begin{equation} \left\lbrack\!\!\!\left\lbrack \frac{ \partial E_\epsilon }{\partial t}\right\rbrack\!\!\!\right\rbrack_{\rm total} = \left\lbrack\!\!\!\left\lbrack \frac{ \partial E_\epsilon }{\partial t}\right\rbrack\!\!\!\right\rbrack_{\rm advection} + \left\lbrack\!\!\!\left\lbrack \frac{ \partial E_\epsilon }{\partial t}\right\rbrack\!\!\!\right\rbrack_\text{diff-coll}, \end{equation} where \begin{equation} \left\lbrack\!\!\!\left\lbrack \frac{ \partial E_\epsilon }{\partial t}\right\rbrack\!\!\!\right\rbrack_{\rm advection} = -{\nabla} \cdot (E_\epsilon {\bf v}) \label{eq:nu-advect}, \end{equation} \begin{equation} \left\lbrack\!\!\!\left\lbrack \frac{ \partial E_\epsilon }{\partial t}\right\rbrack\!\!\!\right\rbrack_\text{diff-coll} = {\nabla} \cdot (D_\epsilon {\nabla} E_{\epsilon}) + \epsilon \frac{\partial}{\partial \epsilon} \left\{ ({\mathsf X}_{\epsilon} E_\epsilon): {\nabla} {\bf v} \right\} + {\mathbb S}_{\epsilon}. \label{eq:nu-diff} \end{equation} To describe the numerical method used to treat each of the operator-split equations shown above, Swesty and Myra note first that the advection equations take the general form \begin{equation} \label{eq:diff_ad_sc} \left\lbrack\!\!\!\left\lbrack \frac{ \partial \psi}{\partial t} \right\rbrack\!\!\!\right\rbrack_{\rm advection} + {\nabla} \cdot \left( \psi {\bf v} \right) = 0, \end{equation} where $\psi$ is the scalar field ($E_\epsilon$ and $\bar{E}_\epsilon$) being advected. They then deploy the ZEUS consistent advection scheme of \citet{StNo92} in a directionally-split manner to each dimension (in their case, $x_1$ and $x_2$) of the problem. For the $x_1$ update, Eq.~\eqref{eq:diff_ad_sc} is discretized as follows: \begin{eqnarray} \label{eq:sflux_comb_x1} \lefteqn{ \frac{ \left[ \Delta V \right]_{i+(1/2),j+(1/2)}}{\Delta t} \left( \left[ \psi \right]^{n+\beta}_{i+(1/2),j+(1/2)} - \left[ \psi \right]^{n+\alpha}_{i+(1/2),j+(1/2)} \right) = } \nonumber \\ & & - \left( \left[ F_1(\psi) \right]_{i+1,j+(1/2)}^{n+\alpha} \left[ \Delta A_1 \right]_{i+1,j+(1/2)} - \left[ F_1(\psi) \right]_{i,j+(1/2)}^{n+\alpha} \left[ \Delta A_1 \right]_{i,j+(1/2)} \right). \end{eqnarray} The fluxes in Eq.~\eqref{eq:sflux_comb_x1} are given by \begin{equation} \left[ F_1(\psi) \right]_{i,j+(1/2)} = \left[ {\cal I}_1\left(\frac{\psi}{\rho}\right) \right]_{i,j+(1/2)} \left[ F_1(\rho) \right]_{i,j+(1/2)}, \label{eq:nca1} \end{equation} where \begin{equation}\label{eq:flux1} \left[ F_{1}(\rho) \right]_{i,j+(1/2)} = \left[ {\cal I}_1(\rho) \right]_{i,j+(1/2)} \left[ {\upsilon}_{1} \right]_{i,j+(1/2)}, \end{equation} and where \begin{equation} \label{eq:i1} \left[ {\cal I}_1(\psi) \right]_{i,j+(1/2)} = \begin{cases} \displaystyle{ \left[ \psi \right]_{i-(1/2),j+(1/2)} + \left[ \delta_1(\psi) \right]_{i-(1/2),j+(1/2)} \left( 1 - \frac{\left[ {\upsilon}_{1}\right]_{i,j+(1/2)} \Delta t } {\left[x_1\right]_{i} - \left[x_1\right]_{i-1} } \right) } & \text{if} \; \left[ {\upsilon}_1 \right]_{i,j+(1/2)} > 0, \vspace{0.1in} \\ \displaystyle{ \left[ \psi \right]_{i+(1/2),j+(1/2)} - \left[ \delta_1(\psi) \right]_{i+(1/2),j+(1/2)} \left( 1 + \frac{ \left[ {\upsilon}_{1} \right]_{i,j+(1/2)} \Delta t} {\left[x_1\right]_{i+1} - \left[x_1\right]_{i} } \right) } & \text{if} \; \left[ {\upsilon}_1 \right]_{i,j+(1/2)} < 0. \\ \end{cases} \end{equation} In Eq.~\eqref{eq:flux1}, $\rho$ is the fluid mass density. ${\cal I}_1(\psi)$ is the van Leer monotonic upwind advection function \citep{vanLeer1977}, whose monotonized slopes $\delta_1(\psi)$ are given by \begin{equation} \label{eq:i1p1} \left[ \delta_1(\psi) \right]_{i+(1/2),j+(1/2)} = \begin{cases} \displaystyle{ \frac{ \left[ \Delta \psi \right]_{i,j+(1/2)} \left[ \Delta \psi \right]_{i+1,j+(1/2)} } {\left[ \psi \right]_{i+(3/2),j+(1/2)} - \left[ \psi \right]_{i-(1/2),j+(1/2)}} } \vspace{0.1in} \\ \hspace{1.5in} \text{if} \; \left[ \Delta \psi \right]_{i,j+(1/2)} \left[ \Delta \psi \right]_{i+1,j+(1/2)} > 0, \vspace{0.25in} \\ 0 \hspace{1.4in} \text{otherwise}, \end{cases} \end{equation} where \begin{equation} \label{eq:delsig1} \left[ \Delta \psi \right]_{i,j+(1/2)} = \left[ \psi \right]_{i+(1/2),j+(1/2)} - \left[ \psi \right]_{i-(1/2),j+(1/2)}. \end{equation} The $x_2$ update is computed in the same way, with the obvious substitutions.
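A one-dimensional transcription of this consistent-advection update on a uniform, periodic grid might look as follows in Python; velocities are taken to live at cell faces (face $i$ at the left edge of cell $i$), scalars at cell centers, and the metric factors are suppressed.
\begin{verbatim}
import numpy as np

def vanleer_slopes(q):
    """Monotonized slopes, cf. Eq. (i1p1): product of adjacent
    differences over their sum, zeroed at extrema (periodic)."""
    dm = q - np.roll(q, 1)            # q_i - q_{i-1}
    dp = np.roll(q, -1) - q           # q_{i+1} - q_i
    denom = np.roll(q, -1) - np.roll(q, 1)
    safe = np.where(denom == 0.0, 1.0, denom)
    return np.where(dm * dp > 0.0, dm * dp / safe, 0.0)

def face_values(q, v, dt, dx):
    """Upwind interpolation to faces, cf. Eq. (i1), including the
    time-centering factor (1 -+ v dt / dx)."""
    d = vanleer_slopes(q)
    qL = np.roll(q, 1) + np.roll(d, 1) * (1.0 - v * dt / dx)  # v > 0
    qR = q - d * (1.0 + v * dt / dx)                          # v < 0
    return np.where(v > 0.0, qL, qR)

def consistent_advection_step(E, rho, v, dt, dx):
    """Cf. Eqs. (sflux_comb_x1)-(flux1): the scalar E is advected
    with the mass flux, using the interpolated specific quantity
    E/rho, so E stays consistent with the advected density."""
    F_rho = face_values(rho, v, dt, dx) * v
    F_E = face_values(E / rho, v, dt, dx) * F_rho
    return E - dt / dx * (np.roll(F_E, -1) - F_E)
\end{verbatim}
The key design point is visible in the last function: the scalar flux is built from the interpolated specific quantity $\psi/\rho$ multiplied by the mass flux, so the advected scalar remains consistent with the advected mass.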
The remaining term, due to neutrino diffusion, relativistic effects, and collisions: \begin{equation} \label{eq:dc-e} \left\lbrack\!\!\!\left\lbrack \frac{\partial (^eE_{\epsilon})}{\partial t} \right\rbrack\!\!\!\right\rbrack_{\rm diff-coll} - {\nabla} \cdot \left(^eD_\epsilon {\nabla}\, ^eE_{\epsilon} \right) - \epsilon \frac{\partial}{\partial \epsilon} \left( ^e{\mathsf P}_{\epsilon}: {\nabla} {\bf v} \right) - ^e{\mathbb S}_{\epsilon} = 0 \end{equation} is differenced implicitly in time and as follows in phase space: \begin{eqnarray} \label{eq:nu_tr_fd} \frac{ \left[ E_\epsilon \right]^{n+1}_{k+(1/2),i+(1/2),j+(1/2)} - \left[E_\epsilon\right]^n_{k+(1/2),i+(1/2),j+(1/2)}}{\Delta t}- \left[{\nabla} \cdot D_\epsilon \nabla E_{\epsilon} \right]^{n+1}_{k+(1/2),i+(1/2),j+(1/2)} \nonumber \\ -\left[ \epsilon \frac{\partial \left({\mathsf P}_{\epsilon}: {\nabla} {\bf v}\right)}{\partial \epsilon} \right]^{n+1}_{k+(1/2),i+(1/2),j+(1/2)} -\left[{\mathbb S}_{\epsilon} \right]^{n+1}_{k+(1/2),i+(1/2),j+(1/2)} = 0 , \nonumber \\ \end{eqnarray} where \begin{eqnarray} \label{eq:divfdiff} \lefteqn{ \left[{\nabla} \cdot D_\epsilon \nabla E_{\epsilon} \right]^{n+1}_{k+(1/2),i+(1/2),j+(1/2)} \equiv } \nonumber \\ & & \frac{1}{ \left[ g_2 \right]_{i+(1/2)} \left[ g_{31} \right]_{i+(1/2)} \left[ g_{32} \right]_{j+(1/2)}} \Biggl\{ \frac{1}{ \left[ x_1 \right]_{i+(3/2)} - \left[ x_1 \right]_{i+(1/2)}} \Biggr. \nonumber \\ & & \quad \quad \Biggl. \times \left( \left[ g_2 \right]_{i+1} \left[ g_{31} \right]_{i+1} \left[ g_{32} \right]_{j+(1/2)} \left[ D_\epsilon(x_1) \right]^{n+t}_{k+(1/2),i+1,j+(1/2)} \Biggr. \right. \nonumber \\ & & \quad \quad \; \; \; \Biggl. \left. \times \frac{ \left[ E_\epsilon \right]^{n+1}_{k+(1/2),i+(3/2),j+(1/2)} - \left[ E_\epsilon \right]^{n+1}_{k+(1/2),i+(1/2),j+(1/2)}} { \left[ x_1 \right]_{i+(3/2)} - \left[ x_1 \right]_{i+(1/2)}} \Biggr. \right. \nonumber \\ & & \quad \quad \; \; \; - \Biggl. \left. \left[ g_2 \right]_{i} \left[ g_{31} \right]_{i} \left[ g_{32} \right]_{j+(1/2)} \left[ D_\epsilon(x_1) \right]^{n+t}_{k+(1/2),i,j+(1/2)} \Biggr. \right. \nonumber \\ & & \quad \quad \; \; \; \Biggl. \left. \times \frac{ \left[ E_\epsilon \right]^{n+1}_{k+(1/2),i+(1/2),j+(1/2)} - \left[ E_\epsilon \right]^{n+1}_{k+(1/2),i-(1/2),j+(1/2)}} { \left[ x_1 \right]_{i+(1/2)} - \left[ x_1 \right]_{i-(1/2)}} \right) \Biggr. \nonumber \\ & & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \Biggl. + \frac{1}{ \left[ x_2 \right]_{j+(3/2)} - \left[ x_2 \right]_{j+(1/2)}} \Biggr. \nonumber \\ & & \quad \quad \Biggl. \times \left( \frac{ \left[ g_{31} \right]_{i+(1/2)} \left[ g_{32} \right]_{j+1}} { \left[ g_2 \right]_{i+(1/2)}} \left[ D_\epsilon(x_2) \right]^{n+t}_{k+(1/2),i+(1/2),j+1} \Biggr. \right. \nonumber \\ & & \quad \quad \; \; \; \Biggl. \left. \times \frac{ \left[ E_\epsilon \right]^{n+1}_{k+(1/2),i+(1/2),j+(3/2)} - \left[ E_\epsilon \right]^{n+1}_{k+(1/2),i+(1/2),j+(1/2)}} {\left[ x_2 \right]_{j+(3/2)} - \left[ x_2 \right]_{j+(1/2)}} \Biggr. \right. \nonumber \\ & & \quad \quad \; \; \; \Biggl. \left. - \frac{ \left[ g_{31} \right]_{i+(1/2)} \left[ g_{32} \right]_{j}} { \left[ g_2 \right]_{i+(1/2)}} \left[ D_\epsilon(x_2) \right]^{n+t}_{k+(1/2),i+(1/2),j} \Biggr. \right. \nonumber \\ & & \quad \quad \; \; \; \Biggl. \left.
\times \frac{ \left[ E_\epsilon \right]^{n+1}_{k+(1/2),i+(1/2),j+(1/2)} - \left[ E_\epsilon \right]^{n+1}_{k+(1/2),i+(1/2),j-(1/2)}} { \left[ x_2 \right]_{j+(1/2)} - \left[ x_2 \right]_{j-(1/2)}} \right) \Biggr\} \end{eqnarray} and \begin{eqnarray} \label{eq:pv_diff} \lefteqn{ \left[ \epsilon \frac{\partial \left({\mathsf P}_{\epsilon}: {\nabla} {\bf v}\right)}{\partial \epsilon} \right]^{n+1}_{k+(1/2),i+(1/2),j+(1/2)} \equiv \frac{\left[ \epsilon \right]_{k+(1/2)}} {\left[ \epsilon \right]_{k+1} - \left[ \epsilon \right]_k} } \nonumber \\ & & \times \biggl( \left[ \mathsf X_{\epsilon}:{\nabla} {\bf v} \right]^{n+t}_{k+(3/2),i+(1/2),j+(1/2)} \left[ E_\epsilon \right]^{n+1}_{k+(3/2),i+(1/2),j+(1/2)} \nonumber \\ & & - \left[ \mathsf X_{\epsilon}:{\nabla} {\bf v} \right]^{n+t}_{k-(1/2),i+(1/2),j+(1/2)} \left[ E_\epsilon \right]^{n+1}_{k-(1/2),i+(1/2),j+(1/2)} \biggl) \end{eqnarray} and \begin{eqnarray} \label{eq:nu_tr_src_trm} \lefteqn{ \left[{\mathbb S}_{\epsilon} \right]^{n+1}_{k+(1/2),i+(1/2),j+(1/2)} \equiv } \nonumber \\ & & -\left[ S_\epsilon \right]^{n+t}_{k+(1/2),i+(1/2),j+(1/2)} \left( 1 + \frac{\eta\alpha}{\left( \left[\epsilon \right]_{k+(1/2)}\right)^3} \left[E_\epsilon\right]^{n+1}_{k+(1/2),i+(1/2),j+(1/2)} \right) \nonumber \\ & & + c \left[\kappa^a_\epsilon\right]^{n+t}_{k+(1/2),i+(1/2),j+(1/2)} \left[E_\epsilon\right]^{n+1}_{k+(1/2),i+(1/2),j+(1/2)} \nonumber \\ & & - \left( 1 + \frac{\eta\alpha}{\left( \left[\epsilon \right]_{k+(1/2)}\right)^3} \left[E_\epsilon\right]^{n+1}_{k+(1/2),i+(1/2),j+(1/2)} \right) \left[ \epsilon \right]_{k+(1/2)} \nonumber \\ & & \times \sum_{\ell=0}^{N_g-1} \left[\Delta\epsilon\right]_{\ell+(1/2)} \left[G\right]^{n+t}_{k+(1/2),\ell+(1/2),i+(1/2),j+(1/2)} \left( 1 + \frac{\eta\alpha}{\left( \left[\epsilon \right]_{\ell+(1/2)}\right)^3} \left[\bar{E}_\epsilon\right]^{n+1}_{\ell+(1/2),i+(1/2),j+(1/2)} \right) \nonumber \\ & & - c\left( 1 + \frac{\eta\alpha}{\left( \left[\epsilon \right]_{k+(1/2)}\right)^3} \left[E_\epsilon\right]^{n+1}_{k+(1/2),i+(1/2),j+(1/2)} \right) \nonumber \\ & & \times \sum_{\ell=0}^{N_g-1} \left[\Delta\epsilon\right]_{\ell+(1/2)} \left[\kappa^s\right]^{n+t}_{k+(1/2),\ell+(1/2),i+(1/2),j+(1/2)} \left[E_\epsilon\right]^{n+1}_{\ell+(1/2),i+(1/2),j+(1/2)} \nonumber \\ & & + c\left[E_\epsilon\right]^{n+1}_{k+(1/2),i+(1/2),j+(1/2)} \nonumber \\ & & \times \sum_{\ell=0}^{N_g-1} \left[\Delta\epsilon\right]_{\ell+(1/2)} \left[\kappa^s\right]^{n+t}_{\ell+(1/2),k+(1/2),i+(1/2),j+(1/2)} \left( 1 + \frac{\eta\alpha}{\left( \left[\epsilon \right]_{\ell+(1/2)}\right)^3} \left[E_\epsilon\right]^{n+1}_{\ell+(1/2),i+(1/2),j+(1/2)} \right) . \nonumber \\ \end{eqnarray} In Eq.~\eqref{eq:divfdiff}, the factors $g_2$, $g_{31}$, and $g_{32}$ derive from the 3-covariant form of the spatial metric used by Swesty and Myra, which is given by \begin{equation}\label{eq:3metric} ds^2 = (g_1)^2 dx_1^2 + (g_2)^2 dx_2^2 + (g_{31} g_{32})^2 dx_3^2 \end{equation} and is written to accommodate Cartesian, cylindrical, and spherical coordinates. In Eq.~\eqref{eq:nu_tr_src_trm}, $\kappa^a$ and $\kappa^s$ are the absorption and scattering opacities, respectively, and $G(\epsilon ,\epsilon^{'})$ is the pair annihilation kernel. The factors $\alpha$ and $\eta$ are constants. $N_g$ is the number of energy groups, and the superscript $n+t$, with $t$ taking on different values, designates the update stages for the electron, muon, and tau neutrino distributions in the overall update scheme used by Swesty and Myra, shown in their Figure 3. 
To solve Eq.~\eqref{eq:nu_tr_fd} and its antineutrino counterpart simultaneously, given their coupling, Swesty and Myra implement the Newton--BiCGSTAB subclass of Newton--Krylov iterative methods. Eq.~\eqref{eq:nu_tr_fd} and its antineutrino counterpart are first linearized. BiCGSTAB is then used to solve the resultant ``inner'' linear system of equations for the updates to the iterates of the outer Newton iteration. Once the quantities $^{\ell}{\mathbb S}_\epsilon$, where $\ell$ denotes neutrino flavor, are known from the solution of Eq.~\eqref{eq:nu_tr_fd} and its counterparts for the other neutrino flavors, Swesty and Myra update the fluid electron fraction and energy density using the following operator split equations: \begin{equation} \label{eq:ye-source} \left\lbrack\!\!\!\left\lbrack \frac{\partial n_e}{\partial t}\right\rbrack\!\!\!\right\rbrack_{\rm collision} = - \int \frac{1}{\epsilon} \left( ^e{\mathbb S}_\epsilon - ^e\bar{{\mathbb S}}_\epsilon \right) d\epsilon, \end{equation} \begin{equation} \left\lbrack\!\!\!\left\lbrack \left(\frac{\partial E}{\partial t} \right)\right\rbrack\!\!\!\right\rbrack_{\rm collision-\ell} = - \int \left({ ^\ell{\mathbb S}_\epsilon + ^\ell\bar{{\mathbb S}}_\epsilon}\right) d\epsilon, \label{eq:e-coll-F} \end{equation} where $n_e$ is the electron number density and $E$ is the matter energy density. Equation~\eqref{eq:e-coll-F} is solved in operator split fashion for each flavor. The discretizations of Eqs.~\eqref{eq:ye-source} and \eqref{eq:e-coll-F} for electron-flavor neutrinos (where both lepton number and energy are exchanged) are: \begin{equation} \label{eq:n_xch_fd} \left[n_e \right]^{n+1}_{i+(1/2),j+(1/2)} = \left[n_e \right]^{n+b}_{i+(1/2),j+(1/2)} - \Delta t \sum_{\ell=0}^{N_g-1} \left[\Delta\epsilon\right]_{\ell+(1/2)} \left( \frac{ \left[^e{\mathbb S}_\epsilon\right]^{n+b}_{i+(1/2),j+(1/2)} - \left[^e\bar{{\mathbb S}}_\epsilon\right]^{n+b}_{i+(1/2),j+(1/2)}} {\left[\epsilon\right]_{\ell+(1/2)}} \right), \end{equation} \begin{equation} \label{eq:e_xch_fde} \left[E \right]^{n+d}_{i+(1/2),j+(1/2)} = \left[E \right]^{n+b}_{i+(1/2),j+(1/2)} - \Delta t \sum_{\ell=0}^{N_g-1} \left[\Delta\epsilon\right]_{\ell+(1/2)} \left( { \left[^e{\mathbb S}_\epsilon\right]^{n+c}_{i+(1/2),j+(1/2)} - \left[^e\bar{{\mathbb S}}_\epsilon\right]^{n+c}_{i+(1/2),j+(1/2)}} \right). \end{equation} In a similar manner, the neutrino--matter momentum exchange is computed. \subsubsection{General-relativistic, finite-difference implementation} A general relativistic implementation of MGFLD was developed by \citet{RaJuJa19}. They begin with the 3+1 metric: \begin{eqnarray} \mathrm{d}s^2 & \equiv & g_{ab} \mathrm{d}x^a \mathrm{d}x^b \nonumber \\ & = & - \alpha^2 \mathrm{d}t^2 + \gamma_{ij} (\mathrm{d}x^i + \beta^i \mathrm{d}t)(\mathrm{d}x^j + \beta^j \mathrm{d}t)~, \label{eq:gr_linele} \end{eqnarray} and the following definitions of the comoving-frame spectral neutrino energy density, momentum density, and stress tensor: \begin{eqnarray} \mathcal{J}(x^{\mu},\epsilon) & \equiv & \epsilon^3 \int f(x^{\mu},p^{\hat \mu})~\mathrm{d}\Omega~, \nonumber \\ \mathcal{H}^{\hat i}(x^{\mu},\epsilon) & \equiv & \epsilon^3 \int l^{\hat i} f(x^{\mu},p^{\hat \mu})~\mathrm{d}\Omega~, \nonumber \\ \mathcal{K}^{\hat i \hat j}(x^{\mu},\epsilon) & \equiv & \epsilon^3 \int l^{\hat i} l^{\hat j} f(x^{\mu},p^{\hat \mu})~\mathrm{d}\Omega~, \label{eq:tr_moment} \end{eqnarray} respectively.
$p^{\hat \mu}\equiv \epsilon (1,l^{\hat i})$ denotes the comoving-frame, momentum-space coordinates. $l^{\hat i}$ is a unit comoving-frame, momentum-space three-vector. With these definitions and choice of phase-space coordinates, Rahman et~al.\ express the evolution equation for the comoving-frame neutrino energy density as given by Eq.~\eqref{eq:spectralLagrangianEnergyEquationFLD_3p1} in Sect.~\ref{sec:oneMomentKinetics}. Given the approximations discussed there, the neutrino energy density equation solved by Rahman et~al.\ becomes \begin{eqnarray} &&\frac{1}{\alpha} \frac{\partial}{\partial t} (W \mathcal{\hat J}) + \frac{1}{\alpha} \frac{\partial}{\partial x^j} [\alpha W (v^j-\beta^j/\alpha) \mathcal{\hat J}] \nonumber \\ &&- \frac{1}{\alpha} \frac{\partial}{\partial x^j} \Big[\alpha^{-2} \sqrt{\gamma} \Big\{ \gamma^{i k} + W \Big(\frac{W}{W+1}v^j-\beta^j/\alpha \Big) v^k \Big\} D \partial_k (\alpha^3 \mathcal{J}) \Big] \nonumber \\ && - \frac{e^{k \hat i}}{\alpha^4} \frac{\partial}{\partial t} (W \sqrt{\gamma} \bar v_{\hat i}) D \partial_k(\alpha^3 \mathcal{J}) +\hat{R}_\epsilon - \frac{\partial}{\partial \epsilon} (\epsilon \hat{R}_\epsilon) \nonumber \\ &&= \kappa_\mathrm{a} (\mathcal{\hat J}^{eq}-\mathcal{\hat J})~, \label{eq:tr_fld_energy_eqn} \end{eqnarray} where the relation $e^{\hat{j}}_{\hspace{2pt}\hat{i}}e^{k\hat{i}}=\gamma^{jk}$ was used. Rahman et~al.\ divide the numerical update into three steps, operator splitting Eq.~\eqref{eq:tr_fld_energy_eqn} into the source term, the radial and spectral shift terms, and the nonradial terms. In step 1, the focus is on the source term, and the corresponding terms in the matter specific internal energy and electron fraction equations. The set of equations to be solved is given by \begin{eqnarray} \frac{W}{\alpha} \partial_t \mathcal{J}_{\nu,\xi} &=& \bigg[ \kappa_\mathrm{a} (\mathcal{J}^{\mathrm{eq}} - \mathcal{J}) \bigg]_{\nu,\xi}, \nonumber \\ \frac{W}{\alpha} \rho \partial_t e (T,Y_\mathrm{e}) &=& - \sum_{\nu,\xi} \bigg[ \kappa_\mathrm{a} (\mathcal{J}^{\mathrm{eq}} - \mathcal{J} ) \Delta \epsilon_\xi \bigg]_{\nu,\xi}, \nonumber \\ \frac{W}{\alpha} \rho \partial_t Y_\mathrm{e} &=& - m_u \sum_{\xi} \bigg[ \big[\kappa_\mathrm{a} (\mathcal{J}^{\mathrm{eq}} - \mathcal{J}) \Delta \epsilon_\xi \big]_{\nu_\mathrm{e}} \nonumber \\ &&- \big[\kappa_\mathrm{a} (\mathcal{J}^{\mathrm{eq}} - \mathcal{J}) \Delta \epsilon_\xi \big]_{\bar \nu_\mathrm{e}} \bigg]_{\xi}, \label{eq:ns_source_term} \end{eqnarray} where $\nu$ and $\xi$ indicate the neutrino species and energy bin, respectively, and $\Delta \epsilon_\xi$ the energy bin width. $m_{u}$ is the atomic mass unit. These equations are differenced fully implicitly in time and solved using Newton--Raphson iteration. Linearization of the equations in $\mathcal{J}_{\nu,\xi}$, $e$, and $Y_e$ leads to a system of linear equations that must be solved for each iteration. To do so, Rahman et~al.\ use a direct (LAPACK) solver. The quantities $\rho$, $\alpha$, $W$, and $\kappa_a$ are all held constant during the Newton--Raphson procedure. 
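Because the step-1 system couples all species and energy bins to the local $(e,Y_{\mathrm{e}})$, each Newton iteration reduces to a dense linear solve per zone. A schematic of one such iteration in Python, where \texttt{resid} is a hypothetical callable returning the backward-Euler residuals for the stacked unknown $x=(\mathcal{J}_{1},\ldots,\mathcal{J}_{N},e,Y_{\mathrm{e}})$ and the dense solve plays the role of the LAPACK call:
\begin{verbatim}
import numpy as np

def newton_step_zone(x, x_old, dt, resid, eps=1.0e-7):
    """One Newton iteration for the per-zone source-term system:
    finite-difference Jacobian plus a dense direct solve."""
    r = resid(x, x_old, dt)
    n = x.size
    jacobian = np.empty((n, n))
    for j in range(n):              # build the Jacobian column by column
        xp = x.copy()
        h = eps * max(1.0, abs(x[j]))
        xp[j] += h
        jacobian[:, j] = (resid(xp, x_old, dt) - r) / h
    return x + np.linalg.solve(jacobian, -r)  # direct (LU) solve per zone
\end{verbatim}
In practice the Jacobian would be assembled analytically; the finite-difference version above is only meant to expose the structure of the solve.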
In step 2, the following equation is solved: \begin{eqnarray} &&W\partial_t \mathcal{\hat J} + \mathcal{R}_r = 0~, \label{eq:2nd_step_equation} \end{eqnarray} where \begin{eqnarray} && \mathcal{R}_r \equiv \partial_t (W) \mathcal{\hat J} + \partial_r [\alpha W(v^r-\beta^r \alpha^{-1})\mathcal{\hat J}] \nonumber \\ &&- \partial_r \Big[ \alpha^{-2} \sqrt{\gamma} \Big\{ \gamma^{rr} + W \Big(\frac{W}{W+1}v^r-\beta^r \alpha^{-1}\Big) v^r \Big\} D_1 \partial_r(\alpha^3 \mathcal{J}) \Big] \nonumber \\ &&- \alpha^{-3} e^{r \hat i} \partial_t (W \sqrt{\gamma} \bar v_{\hat i})D_1 \partial_r (\alpha^3 \mathcal{J}) + \alpha \Big[\hat R_\epsilon - \frac{\partial}{\partial \epsilon} (\epsilon \hat R_\epsilon) \Big]~ \label{eq:tr_rad_red_term} \end{eqnarray} includes radial advection, diffusion, and acceleration, as well as spectral shifts. $D_1$ denotes the radial diffusion coefficient. Equation~(\ref{eq:2nd_step_equation}) is solved using the Crank--Nicolson scheme: \begin{eqnarray} && (W \sqrt{\gamma}) \frac{\mathcal{J}^{n+1}_i-\mathcal{J}^{n}_i}{\Delta t} = -\frac{1}{2} (\mathcal{R}^{n+1}_{r,i}+\mathcal{R}^{n}_{r,i})~. \label{eq:tr_rad_red_term_discrete} \end{eqnarray} All gravity and hydrodynamics variables are kept fixed during the transport updates. The term $\mathcal{R}_{r,i}$ on the right-hand side of equation (\ref{eq:tr_rad_red_term_discrete}) is evaluated at both $t^n$ and $t^{n+1}$. For $t^{n+1}$, Rahman et~al.\ provide the following discretizations. The diffusion term is discretized as \begin{eqnarray} &&\Big[ \partial_r \{A^r D_1 \partial_r(\alpha^3 \mathcal{J})\} \Big]^{n+1}_{i} = \nonumber \\ &&\frac{1}{\Delta r} \Big[A^{r}_{i+1/2}D^{n}_{1,i+1/2} \frac{\alpha^3_{i+1} \mathcal{J}^{n+1}_{i+1}-\alpha^3_{i} \mathcal{J}^{n+1}_{i}}{\Delta r} \nonumber \\ &&- A^{r}_{i-1/2}D^{n}_{1,i-1/2} \frac{\alpha^3_{i} \mathcal{J}^{n+1}_i-\alpha^3_{i-1} \mathcal{J}^{n+1}_{i-1}}{\Delta r} \Big]~, \label{eq:tr_rad_red_diff_discrete0} \end{eqnarray} where \begin{eqnarray} A^r &\equiv& \alpha^{-2} \sqrt{\gamma} \Big\{ \gamma^{rr} + W \Big(\frac{W}{W+1}v^r-\beta^r \alpha^{-1}\Big) v^r \Big\}~. \label{eq:tr_rad_red_diff_discrete} \end{eqnarray} $i-1/2$ and $i+1/2$ denote the left and right zone edges for zone $i$, respectively. Values of the gravity and hydrodynamics variables at zone edges are determined by linear interpolation of their zone-center counterparts. The fluid acceleration term is discretized as \begin{eqnarray} &&\Big[B^r D_{1} \partial_r(\alpha^3 \mathcal{J})\Big]^{n+1}_{i} = \nonumber \\ &&\frac{B^{r}_i}{2} \Big[ D^{n}_{1,i+1/2} \frac{\alpha^3_{i+1} \mathcal{J}^{n+1}_{i+1}-\alpha^3_{i}\mathcal{J}^{n+1}_{i}}{\Delta r} \nonumber \\ &&+ D^{n}_{1,i-1/2} \frac{\alpha^3_{i}\mathcal{J}^{n+1}_i-\alpha^3_{i-1}\mathcal{J}^{n+1}_{i-1}}{\Delta r} \Big]~, \label{eq:tr_rad_red_aber_discrete} \end{eqnarray} where \begin{eqnarray} B^r &\equiv& \alpha^{-3} e^{r \hat i} \partial_t(W \sqrt{\gamma} \bar v_{\hat i})~. \label{eq:tr_rad_red_aber_discrete_B} \end{eqnarray} The metric and hydrodynamics variables before and after the metric and hydrodynamics updates are used to evaluate the time derivative in equation (\ref{eq:tr_rad_red_aber_discrete_B}).
The advection term is discretized in an upwind fashion as \begin{eqnarray} \Big[ \partial_r (C^r {\mathcal{J}}) \Big]^{n+1}_{i} &=& \frac{1}{\Delta r} \Big[C^{r}_{i+1/2} \mathcal{J}^{n+1}_{\iota(i+1/2)} - C^{r}_{i-1/2} \mathcal{J}^{n+1}_{\iota(i-1/2)} \Big]~, \label{eq:tr_rad_red_adv_discrete} \end{eqnarray} where \begin{eqnarray} C^r &\equiv& \alpha \sqrt{\gamma} W(v^r-\beta^r \alpha^{-1}) \label{eq:tr_rad_red_adv_discrete_C} \end{eqnarray} and \begin{eqnarray} \iota(i+1/2) &\equiv& \begin{cases} i, & \text{if } v^r_{i+1/2} > 0 ~ ,\\ i+1, & \text{otherwise}~. \end{cases} \label{eq:tr_rad_red_adv_discrete_jota} \end{eqnarray} Spectral shifts---the last term in equation (\ref{eq:tr_rad_red_term})---are discretized using the number-conservative scheme of \citet{MuJaDi10} discussed in Sect.~\ref{sec:EnergyDiscretization}. The flux factor, $f^{\hat{i}}$, and the Eddington tensor, $\chi^{\hat{i}\hat{j}}$, are used to replace $\mathcal{H}^{\hat{i}}$ and $\mathcal{K}^{\hat{i}\hat{j}}$ by $f^{\hat{i}}\mathcal{J}$ and $\chi^{\hat{i}\hat{j}}\mathcal{J}$, respectively. In evaluating the spectral shift terms, both the flux factor and the Eddington tensor are evaluated at $t^n$, whereas the energy density, $\mathcal{J}$, is evaluated at $t^{n+1}$. The remaining advection and diffusion terms are included in the last transport step, encapsulated in the equation \begin{eqnarray} &&W \sqrt{\gamma} \partial_t (\mathcal{J}) = \mathcal{R}(\mathcal{J})~, \label{eq:3rd_step_equation} \end{eqnarray} where \begin{eqnarray} &&\mathcal{R}(\mathcal{J}) \equiv -\partial_\theta [\alpha W (v^\theta-\beta^\theta \alpha^{-1}) \mathcal{\hat J}] - \partial_\phi [\alpha W (v^\phi-\beta^\phi \alpha^{-1}) \mathcal{\hat J}] \nonumber \\ &&+ \partial_\theta \Big[ \alpha^{-2} \sqrt{\gamma} \Big\{ \gamma^{\theta \theta} + W \Big(\frac{W}{W+1}v^\theta-\beta^\theta \alpha^{-1}\Big) v^\theta \Big\} D_{2} \partial_\theta(\alpha^3 \mathcal{J}) \Big] \nonumber \\ &&+ \partial_\phi \Big[ \alpha^{-2} \sqrt{\gamma} \Big\{ \gamma^{\phi \phi} + W \Big(\frac{W}{W+1}v^\phi-\beta^\phi \alpha^{-1}\Big) v^\phi \Big\} D_{3} \partial_\phi(\alpha^3 \mathcal{J}) \Big] \nonumber \\ &&+ \alpha^{-3} e^{\theta \hat i} \partial_t(W \sqrt{\gamma} \bar v_{\hat i}) D_2 \partial_\theta(\alpha^3 \mathcal{J}) \nonumber \\ &&+ \alpha^{-3} e^{\phi \hat i} \partial_t(W \sqrt{\gamma} \bar v_{\hat i}) D_3 \partial_\phi(\alpha^3 \mathcal{J}). \label{eq:ns_lat_term} \end{eqnarray} $D_2$ and $D_3$ are the diffusion coefficients in the $\theta$ and $\phi$ directions, respectively. Equation (\ref{eq:3rd_step_equation}) is evolved using one of two explicit methods: Allen--Cheng \citep{AlCh70} and Runge--Kutta--Legendre (RKL2) \citep{MeBaAs12}. The latter is a conditionally stable method expressly designed for the diffusion equation.
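Before detailing the two explicit methods, the implicit update of step 2 can be illustrated in isolation. The following minimal sketch (in Python) applies the Crank--Nicolson scheme of Eq.~\eqref{eq:tr_rad_red_term_discrete} to the diffusion term alone, with all metric factors and the Lorentz factor set to unity, a uniform grid, and zero-flux boundaries; the discrete operator mirrors Eq.~\eqref{eq:tr_rad_red_diff_discrete0}, but the profile and coefficients are illustrative only.
\begin{verbatim}
import numpy as np

def cn_diffusion_step(J, D_face, dt, dr):
    # One Crank--Nicolson step of dJ/dt = d/dr(D dJ/dr) on a uniform
    # grid with zero-flux boundaries; D_face holds the diffusion
    # coefficient at the N+1 zone edges (cf. D_{1,i+1/2} in the text).
    N = J.size
    L = np.zeros((N, N))
    for i in range(N):
        if i > 0:                      # flux through left zone edge
            L[i, i-1] += D_face[i] / dr**2
            L[i, i]   -= D_face[i] / dr**2
        if i < N - 1:                  # flux through right zone edge
            L[i, i+1] += D_face[i+1] / dr**2
            L[i, i]   -= D_face[i+1] / dr**2
    I = np.eye(N)
    # (I - dt/2 L) J^{n+1} = (I + dt/2 L) J^n
    return np.linalg.solve(I - 0.5 * dt * L, (I + 0.5 * dt * L) @ J)

J = np.exp(-np.linspace(-2.0, 2.0, 50)**2)       # initial profile
J_new = cn_diffusion_step(J, D_face=np.full(51, 0.1), dt=0.01, dr=0.1)
assert np.isclose(J_new.sum(), J.sum())          # zero flux: sum conserved
\end{verbatim}
A production solver would use a banded (tridiagonal) solve rather than a dense one, and would include the advection, acceleration, and spectral shift contributions to $\mathcal{R}_r$.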
In the Allen--Cheng method, a predictor step provides the following partial update: \begin{eqnarray} &&\frac{(W\sqrt{\gamma})}{\Delta t}(\mathcal{J}^{*}_k - \mathcal{J}^{n}_k) = - \frac{1}{2 \Delta y} (F_{k+1} \mathcal{J}^n_{k+1} - F_{k-1} \mathcal{J}^n_{k-1}) \nonumber \\ &&+ \frac{1}{\Delta y^2} [E_{k+1/2} (\alpha^3_{k+1} \mathcal{J}^n_{k+1}-\alpha^3_{k} \mathcal{J}^{*}_k) \nonumber \\ &&- E_{k-1/2} (\alpha^3_{k} \mathcal{J}^{*}_k-\alpha^3_{k-1} \mathcal{J}^n_{k-1})]~ \nonumber \\ && + \frac{G_{k}}{2\Delta y} \big[ D_{k+1/2} (\alpha^3_{k+1} \mathcal{J}^{n}_{k+1}-\alpha^3_{k} \mathcal{J}^{*}_{k}) \nonumber \\ &&+ D_{k-1/2} (\alpha^3_{k} \mathcal{J}^{*}_{k}-\alpha^3_{k-1} \mathcal{J}^{n}_{k-1}) \big]~, \label{eq:ns_allen_cheng_pred} \end{eqnarray} which, in turn, is followed by a corrector step that provides the complete update: \begin{eqnarray} &&\frac{(W\sqrt{\gamma})}{\Delta t}(\mathcal{J}^{n+1}_k - \mathcal{J}^{n}_k) = - \frac{1}{2 \Delta y} (F_{k+1} \mathcal{J}^{*}_{k+1} - F_{k-1} \mathcal{J}^{*}_{k-1}) \nonumber \\ &&+ \frac{1}{\Delta y^2} [E_{k+1/2} (\alpha^3_{k+1} \mathcal{J}^{*}_{k+1}-\alpha^3_{k} \mathcal{J}^{n+1}_k) \nonumber \\ &&- E_{k-1/2} (\alpha^3_{k} \mathcal{J}^{n+1}_k-\alpha^3_{k-1} \mathcal{J}^{*}_{k-1})] \nonumber \\ &&+ \frac{G_{k}}{2\Delta y} \big[ D_{k+1/2} (\alpha^3_{k+1} \mathcal{J}^{*}_{k+1}-\alpha^3_{k} \mathcal{J}^{n+1}_{k}) \nonumber \\ &&+ D_{k-1/2} (\alpha^3_{k} \mathcal{J}^{n+1}_{k}-\alpha^3_{k-1} \mathcal{J}^{*}_{k-1}) \big]~, \label{eq:ns_allen_cheng_cor} \end{eqnarray} where \begin{eqnarray} &&E \equiv \alpha^{-2} \sqrt{\gamma} \Big\{ \gamma^{jj} + W \Big(\frac{W}{W+1}v^j-\beta^j \alpha^{-1}\Big) v^j \Big\} D~, \nonumber \\ &&F \equiv \alpha \sqrt{\gamma} W (v^j-\beta^j \alpha^{-1})~, \nonumber \\ &&G \equiv \alpha^{-3} e^{j \hat i} \partial_t(W \sqrt{\gamma} \bar v_{\hat i}). \label{eq:ns_allen_cheng_FE} \end{eqnarray} In equations (\ref{eq:ns_allen_cheng_pred}) and (\ref{eq:ns_allen_cheng_cor}), only one spatial index, $k$, is explicitly shown and represents a zone index in either the $\theta$ or the $\phi$ direction. Moreover, in the discretizations shown, the gridding in the single dimension is assumed to be uniform, with zone width $\Delta y$. In the ($s$-stage) RKL2 method, which Rahman et~al.\ deploy as a 4-stage method, the update in each of the four stages is given by \begin{eqnarray} \mathcal{J}_{0} &=& \mathcal{J}^{n}~, \nonumber \\ \mathcal{J}_{1} &=& \mathcal{J}_{0} + \frac{2}{27} \frac{\Delta t}{W\sqrt{\gamma}} \mathcal{R}(\mathcal{J}_{0})~, \nonumber \\ \mathcal{J}_{2} &=& \frac{3}{2} \mathcal{J}_{1} - \frac{1}{2} \mathcal{J}_{0} + \frac{\Delta t}{W\sqrt{\gamma}} \Bigg( \frac{1}{3} \mathcal{R}(\mathcal{J}_{1}) - \frac{2}{9} \mathcal{R}(\mathcal{J}_{0}) \Bigg)~, \nonumber \\ \mathcal{J}_{3} &=& \frac{25}{12} \mathcal{J}_{2} - \frac{5}{6} \mathcal{J}_{1} - \frac{1}{4} \mathcal{J}_{0} + \frac{\Delta t}{W\sqrt{\gamma}} \Bigg( \frac{25}{54} \mathcal{R}(\mathcal{J}_{2}) - \frac{25}{81} \mathcal{R}(\mathcal{J}_{0}) \Bigg)~, \nonumber \\ \mathcal{J}_{4} &=& \frac{189}{100} \mathcal{J}_{3} - \frac{81}{80} \mathcal{J}_{2} + \frac{49}{400} \mathcal{J}_{0} \nonumber \\ && + \frac{\Delta t}{W\sqrt{\gamma}} \Bigg( \frac{21}{50} \mathcal{R}(\mathcal{J}_{3}) - \frac{49}{200} \mathcal{R}(\mathcal{J}_{0}) \Bigg)~, \nonumber \\ \mathcal{J}^{n+1} &=& \mathcal{J}_{4}~.
\label{eq:ns_runge_kutta_legendre} \end{eqnarray} For the $s$-th stage and zone $k$, $\mathcal{R}(\mathcal{J})$ is discretized as \begin{eqnarray} &&\mathcal{R}_k(\mathcal{J}_s) = - \frac{1}{2 \Delta y} (F_{k+1} \mathcal{J}_{s,k+1} - F_{k-1} \mathcal{J}_{s,k-1}) \nonumber \\ && + \frac{1}{\Delta y^2} (E_{k+1/2} (\alpha^3_{k+1} \mathcal{J}_{s,k+1}-\alpha^3_{k} \mathcal{J}_{s,k}) \nonumber \\ &&- E_{k-1/2} (\alpha^3_{k} \mathcal{J}_{s,k}-\alpha^3_{k-1} \mathcal{J}_{s,k-1})) \nonumber \\ && + \frac{G_{k}}{2\Delta y} \big[ D_{k+1/2} (\alpha^3_{k+1} \mathcal{J}_{s,k+1}-\alpha^3_{k} \mathcal{J}_{s,k}) \nonumber \\ &&+ D_{k-1/2} (\alpha^3_{k} \mathcal{J}_{s,k}-\alpha^3_{k-1} \mathcal{J}_{s,k-1}) \big]. \label{eq:ns_runge_kutta_legendre_R} \end{eqnarray} Finally, it is important to note that Rahman et~al.\ go to great lengths to ensure that their definitions of the diffusion coefficients preserve causality for both the individual and the total radiative fluxes. To accomplish this, they compute the gradient of the energy density as \begin{eqnarray}\label{eq:C.1} & & |\nabla \mathcal{J}|_{i,j,k} \\ & = & \sqrt{\Bigg(\frac{\mathcal{J}_{i+1,j,k}-\mathcal{J}_{i-1,j,k}}{r_{i+1}-r_{i-1}}\Bigg)^2 +\Bigg(\frac{\mathcal{J}_{i,j+1,k}-\mathcal{J}_{i,j-1,k}}{r_i(\theta_{j+1}-\theta_{j-1})}\Bigg)^2 +\Bigg(\frac{\mathcal{J}_{i,j,k+1}-\mathcal{J}_{i,j,k-1}}{r_i\sin{\theta_j}(\phi_{k+1}-\phi_{k-1})}\Bigg)^2} \nonumber \end{eqnarray} and the Knudsen number as \begin{eqnarray}\label{eq:C.2} R_{i,j,k} = \frac{|\nabla \mathcal{J}|_{i,j,k}}{(\kappa_\mathrm{t})_{i,j,k} \mathcal{J}_{i,j,k}}, \end{eqnarray} where $({\kappa_\mathrm{t}})_{i,j,k}$ is the transport opacity at the cell center $(i,j,k)$. Equation~\eqref{eq:limiterLPW} is then used to compute the flux limiter, and the causality-preserving diffusion coefficients are given by \begin{eqnarray}\label{eq:C.3} D_{i,j,k} = \frac{\lambda_{i,j,k}}{(\kappa_\mathrm{t})_{i,j,k}}. \end{eqnarray} Rahman et~al.\ do not report on the conservation of lepton number in their code, but given their use of the method developed by \citet{MuJaDi10}, which is specifically designed to conserve lepton number, it should be quite good. They do report on their conservation of energy. They report a change in total energy of $1.85\times10^{51}$ erg at 60 ms after bounce, most of which is incurred at bounce, and a much more gradual increase between 60 and 525 ms after bounce to a final value of $\Delta E = 2.0\times10^{51}$ erg. As discussed in Sect.~\ref{sec:lepenergycons}, their use of the Lagrangian two-moment model as the starting point for their MGFLD implementation does not lend itself to conserving energy, nor does their use of flux-limited diffusion, as discussed in \citet{JuObJa15} and in references cited therein. \subsubsection{Newtonian-gravity, $O(v/c)$, finite-volume implementation} As part of the development of the CASTRO code, \citet{ZhHoAl13} developed an MGFLD solver using finite-volume methods.
They express the equations of multigroup radiation hydrodynamics as \begin{align} \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \vec{u}) = { } & 0, \label{eq:mgrhd-rho} \\ \frac{\partial (\rho \vec{u})}{\partial t} + \nabla \cdot (\rho \vec{u} \vec{u}) + \nabla p + \sum_{g} \lambda_g \nabla E_g = { } & \vec{F}_G, \label{eq:mgrhd-rhou} \\ \frac{\partial (\rho E)}{\partial t} + \nabla \cdot (\rho E \vec{u} + p \vec{u}) + \vec{u} \cdot \sum_{g}\lambda_g \nabla E_g = { } & \sum_{g} c (\kappa_gE_{g}-j_g) + \vec{u}\cdot\vec{F}_G, \label{eq:mgrhd-rhoE} \\ \frac{\partial (\rho Y_e)}{\partial t} + \nabla \cdot (\rho Y_e \vec{u}) = { } & \sum_{g} c \xi_g (\kappa_gE_{g}-j_g) , \label{eq:mgrhd-Ye} \\ \frac{\partial E_g}{\partial t} + \nabla \cdot \left(\frac{3-f_g}{2} E_g \vec{u}\right) - \vec{u} \cdot \nabla \left(\frac{1-f_g}{2} E_g\right) = { } & - c (\kappa_gE_{g}-j_g) + \nabla \cdot \left(\frac{c\lambda_g}{\chi_g} \nabla E_g \right) \label{eq:mgrhd-Eg} \\ + \int_g \frac{\partial}{\partial \nu} \Bigg{[}\left(\frac{1-f}{2} \nabla \cdot \vec{u} + \frac{3f-1}{2} \hat{\vec{n}}\hat{\vec{n}} : \nabla \vec{u} \right) & \nu E_\nu \Bigg{]} \mathrm{d}\nu - \frac{3f_g-1}{2} E_g \hat{\vec{n}}\hat{\vec{n}} : \nabla \vec{u}, \nonumber \end{align} where the group quantities are defined as \begin{equation} E_g = \int_{\nu_{g-1/2}}^{\nu_{g+1/2}} E_\nu \mathrm{d}\nu, \label{eq:Egdef} \end{equation} \begin{equation} \label{eq:emissivity-g} j_g = \frac{4\pi}{c}\eta(\nu_g) \Delta \nu_g, \end{equation} and \begin{equation} \label{eq:xi} \xi_g = s \frac{m_{\mathrm{B}}}{h \nu_g}. \end{equation} In Eq.~\eqref{eq:Egdef}, the neutrino energy density per frequency, $E_\nu$, is integrated over the frequency group defined by the interval $[\nu_{g-1/2},\nu_{g+1/2}]$ to yield the energy density per group. Eq.~\eqref{eq:emissivity-g} defines the group emissivity in terms of the emissivity, $\eta$, and the group width $\Delta\nu_g=\nu_{g+1/2}-\nu_{g-1/2}$. In order of appearance in the equations, the remaining quantities, $\lambda_g$, $\kappa_g$, and $f_g$, are defined by evaluating the flux limiter, $\lambda$, the absorption coefficient, $\kappa$, and the Eddington factor, $f$, at a representative group frequency, $\nu_g$---i.e., they are all group-mean values. Finally, for neutrinos, $\xi_g$ is given by Eq.~\eqref{eq:xi}, with $s=+1$ for electron neutrinos and $s=-1$ for electron antineutrinos. Zhang et~al.\ split these equations into three subsets, based on their mathematical characteristics and in an effort to minimize issues arising from operator splitting.
There is a hyperbolic subsystem that includes the evolution of the electron fraction (it also includes pieces of the evolution equation for the neutrino energy density, but the neutrino energy density is not evolved using this subsystem, as will be discussed): \begin{align} \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \vec{u}) = { } & 0, \label{eq:hyper-Eg1} \\ \frac{\partial (\rho \vec{u})}{\partial t} + \nabla \cdot (\rho \vec{u} \vec{u}) + \nabla p + \sum_{g} \lambda_g \nabla E_g = { } & \vec{F}_G , \label{eq:hyper-rhou} \\ \frac{\partial (\rho E)}{\partial t} + \nabla \cdot (\rho E \vec{u} + p \vec{u}) +\vec{u} \cdot \sum_{g} \lambda_g \nabla E_g = { } & \vec{u} \cdot \vec{F}_G , \label{eq:hyper-rhoE} \\ \frac{\partial (\rho Y_e)}{\partial t} + \nabla \cdot (\rho Y_e \vec{u}) = { } & 0, \label{eq:hyper-Ye} \\ \frac{\partial E_g}{\partial t} + \nabla \cdot \left(\frac{3-f_g}{2} E_g \vec{u}\right) - \vec{u} \cdot \nabla \left( \frac{1-f_g}{2} E_g\right) = { } & 0 . \label{eq:hyper-Eg5} \end{align} There is a second set of hyperbolic equations that governs the evolution of the neutrino energy density \emph{sans} the diffusion term and the term that describes the coupling of neutrinos to the matter: \begin{align} \frac{\partial E_g}{\partial t} = { } & -\nabla \cdot (E_g \vec{u}), \label{eq:Eg2} \\ \frac{\partial E_\nu}{\partial t} = { } & \frac{\partial}{\partial \ln{\nu}} \left[ \left(\frac{1-f}{2} \nabla \cdot \vec{u} + \frac{3f-1}{2} \hat{\vec{n}}\hat{\vec{n}} : \nabla \vec{u}\right) E_\nu \right]. \label{eq:fspace2} \end{align} This second set of equations results from a splitting of their equation for the neutrino energy density per frequency, $E_\nu$, prior to integration over group frequencies: \begin{align} \frac{\partial E_{\nu}}{\partial t} + \nabla \cdot (E_{\nu} \vec{u}) = { } & \nabla \cdot \left(\frac{c\lambda}{\chi} \nabla E_{\nu} \right) - (c\kappa E_{\nu} - 4\pi\eta) \nonumber \\ { } & + \frac{\partial}{\partial \ln{\nu}} \left(\frac{1-f}{2} E_\nu \nabla \cdot \vec{u} + \frac{3f-1}{2} E_\nu \hat{\vec{n}}\hat{\vec{n}} : \nabla \vec{u}\right) . \label{eq:fdrhd-Enu} \end{align} Finally, there is a parabolic system of equations that describes the evolution of the neutrino energy density due to the diffusion of neutrinos in the stellar core, as well as the evolution of the matter internal energy and electron fraction as a result of neutrino--matter interactions: \begin{align} \frac{\partial (\rho e)}{\partial t} = { } & \sum_{g} c (\kappa_gE_{g}-j_g), \label{eq:dEg-i1} \\ \frac{\partial (\rho Y_e)}{\partial t} = { } & \sum_{g} c \xi_g (\kappa_gE_{g}-j_g), \label{eq:dEg-i2} \\ \frac{\partial E_g}{\partial t} = { } & -c (\kappa_gE_{g}-j_g) + \nabla \cdot \left(\frac{c\lambda_g}{\chi_g} \nabla E_g \right). \label{eq:dEg-i3} \end{align} The equations in the first hyperbolic subsystem, Eqs.~\eqref{eq:hyper-Eg1} through \eqref{eq:hyper-Eg5}, are solved using an explicit, unsplit, PPM method, with characteristic limiting, full corner coupling, and the approximate Riemann solver of \citet{BeCoTr89}. Given the computed Godunov states, the radiation field energy density is in turn updated via Eq.~\eqref{eq:Eg2}. Finally, Eq.~\eqref{eq:fspace2}, which takes the form of an advection equation in neutrino-energy space, is solved using a second, explicit Godunov method, based on the method of lines. In this explicit part of the update scheme, a third-order, TVD, Runge--Kutta scheme developed by \citet{ShOs88} is used.
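The \citet{ShOs88} scheme advances a semi-discrete system $\dot{u}=L(u)$ through three forward-Euler-type stages that are combined convexly. A minimal sketch (in Python), with a generic, user-supplied right-hand-side function \texttt{L}, reads:
\begin{verbatim}
def ssp_rk3_step(u, L, dt):
    # One step of the third-order TVD (SSP) Runge--Kutta scheme of
    # Shu & Osher (1988); each stage is a convex combination of
    # forward-Euler updates, which is what transfers the stability
    # and bound properties of forward Euler to the full scheme.
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))
\end{verbatim}
The same stage structure reappears in the strong stability-preserving methods discussed in Sect.~\ref{sec:structurePreservingMethods}.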
The parabolic system, Eqs.~\eqref{eq:dEg-i1} through \eqref{eq:dEg-i3}, is instead solved implicitly. Zhang et~al.\ reformulate the equations as \begin{align} F_e = { } & \rho e - \rho e^{-} - \Delta t \sum_g c (\kappa_g E_g - j_g) = 0 , \label{eq:Fe} \\ F_Y = { } & \rho Y_e - \rho Y_e^{-} - \Delta t \sum_g c \xi_g (\kappa_g E_g - j_g) = 0 , \label{eq:FY} \\ F_g = { } & E_g - E_g^{-} - \Delta t\, \nabla \cdot \left(\frac{c\lambda_g}{\chi_g} \nabla E_g \right) + \Delta t\, c (\kappa_g E_g - j_g) = 0 , \end{align} and linearize in $T$, $Y_e$, and $E_g$ to obtain the (outer) linear system \begin{equation} \left[\begin{array}{ccc} ({\partial F_e}/{\partial T})^{(k)} & ({\partial F_e}/{\partial Y_e})^{(k)} & ({\partial F_e}/{\partial E_g})^{(k)} \\[3pt] ({\partial F_Y}/{\partial T})^{(k)} & ({\partial F_Y}/{\partial Y_e})^{(k)} & ({\partial F_Y}/{\partial E_g})^{(k)} \\[3pt] ({\partial F_g}/{\partial T})^{(k)} & ({\partial F_g}/{\partial Y_e})^{(k)} & ({\partial F_g}/{\partial E_g})^{(k)} \end{array} \right] \left[\begin{array}{c} \delta T^{(k+1)}\\ \delta Y_e^{(k+1)}\\ \delta E_g^{(k+1)} \end{array}\right] = \left[\begin{array}{c} - F_e^{(k)}\\ - F_Y^{(k)}\\ - F_g^{(k)} \end{array}\right]. \label{eq:newton} \end{equation} They point out that if the derivatives of the diffusion coefficient, $c\lambda_g/\chi_g$, with respect to $T$, $Y_e$, and $E_g$ are ignored, the linear system of equations collapses to an equation for the $(k+1)^{\rm st}$ iterate, $E^{(k+1)}_{g}$: \begin{equation} \begin{split} \left(c\kappa_g + \frac{1}{\Delta t}\right) E_g^{(k+1)} & - \nabla \cdot \left( \frac{c\lambda_g}{\chi_g} \nabla E_g^{(k+1)} \right) = c j_g + \frac{E_g^{-}}{\Delta t} \\ & + H_g \left[c\sum_{g^{\prime}}\left(\kappa_{g^{\prime}} E_{g^{\prime}}^{(k+1)} - j_{g^{\prime}} \right) - \frac{1}{\Delta t} (\rho e^{(k)} - \rho e^{-}) \right] \\ & + \Theta_g \left[c \sum_{g^{\prime}}\xi_{g^{\prime}}\left(\kappa_{g^{\prime}} E_{g^{\prime}}^{(k+1)} - j_{g^{\prime}} \right) - \frac{1}{\Delta t} (\rho Y_e^{(k)} - \rho Y_e^{-}) \right] , \label{eq:MGdiff0} \end{split}\end{equation} where $\lambda_g$, $\kappa_g$, $\chi_g$, and $j_g$ are evaluated at the $k^{\rm th}$ iterate, and where \begin{align} H_g = { } & \left(\frac{\partial j_g}{\partial T} - \frac{\partial \kappa_g}{\partial T} E_g^{(k)} \right) \eta_T - \left(\frac{\partial j_g}{\partial Y_e} - \frac{\partial \kappa_g}{\partial Y_e} E_g^{(k)}\right) \eta_Y ,\\ \Theta_g = { } & \left(\frac{\partial j_g}{\partial Y_e} - \frac{\partial \kappa_g}{\partial Y_e} E_g^{(k)}\right) \theta_Y - \left(\frac{\partial j_g}{\partial T} - \frac{\partial \kappa_g}{\partial T} E_g^{(k)} \right) \theta_T , \end{align} and \begin{align} \eta_T = { } & \frac{c\Delta t}{\Omega} \left[\rho + c \Delta t \sum_g \xi_g \left(\frac{\partial j_g}{\partial Y_e} - \frac{\partial \kappa_g}{\partial Y_e} E_g^{(k)}\right) \right] , \\ \eta_Y = { } & \frac{c\Delta t}{\Omega} \left[c \Delta t \sum_g \xi_g \left(\frac{\partial j_g}{\partial T} - \frac{\partial \kappa_g}{\partial T} E_g^{(k)} \right) \right] , \\ \theta_T = { } & \frac{c\Delta t}{\Omega} \left[ \rho \frac{\partial e}{\partial Y_e} + c \Delta t \sum_g \left(\frac{\partial j_g}{\partial Y_e} - \frac{\partial \kappa_g}{\partial Y_e} E_g^{(k)}\right) \right] , \\ \theta_Y = { } & \frac{c\Delta t}{\Omega} \left[\rho \frac{\partial e}{\partial T} + c\Delta t \sum_g \left(\frac{\partial j_g}{\partial T} - \frac{\partial \kappa_g}{\partial T} E_g^{(k)} \right) \right] , \\ \Omega = { } & \left[\rho \frac{\partial e}{\partial T} +
c\Delta t \sum_g \left(\frac{\partial j_g}{\partial T} - \frac{\partial \kappa_g}{\partial T} E_g^{(k)} \right) \right] \left[\rho + c \Delta t \sum_g \xi_g \left(\frac{\partial j_g}{\partial Y_e} - \frac{\partial \kappa_g}{\partial Y_e} E_g^{(k)}\right) \right] \nonumber \\ & - \left[ \rho \frac{\partial e}{\partial Y_e} + c \Delta t \sum_g \left(\frac{\partial j_g}{\partial Y_e} - \frac{\partial \kappa_g}{\partial Y_e} E_g^{(k)}\right) \right] \left[c \Delta t \sum_g \xi_g \left(\frac{\partial j_g}{\partial T} - \frac{\partial \kappa_g}{\partial T} E_g^{(k)} \right) \right], \end{align} all of which are evaluated at the $k^{\rm th}$ iterate. Eq.~\eqref{eq:MGdiff0} couples $E_g$ across all energy groups. To decouple the groups, Zhang et~al.\ choose to use an (inner) iterative procedure by evaluating the right-hand side at the $k^{\rm th}$ iterate of $E_g$ and iterating the solution of Eq.~\eqref{eq:MGdiff0} to convergence. Once $E^{(k+1)}_g$ is known, the updates for $\rho e$ and $Y_e$ are determined by \begin{align} \rho e^{(k+1)} = { } & H \rho e^{(k)} + (1-H) \rho e^- + \Theta (\rho Y_e^{(k)} - \rho Y_e^-) \nonumber \\ & + c \Delta t \sum_g\left[(\kappa_gE_g^{(k+1)} - j_g) - (H + \Theta \xi_g) (\kappa_gE_g^{(k)} - j_g)\right] , \label{eq:uprhoe}\\ \rho Y_e^{(k+1)} = { } & \bar{\Theta} \rho Y_e^{(k)} + (1-\bar{\Theta}) \rho Y_e^- + \bar{H} (\rho e^{(k)} - \rho e^-) \nonumber \\ & + c\Delta t \sum_g\left[\xi_g(\kappa_gE_g^{(k+1)} - j_g) - (\bar{H} + \bar{\Theta} \xi_g) (\kappa_gE_g^{(k)} - j_g)\right] , \label{eq:uprhoYe} \end{align} which stem from Eqs.~\eqref{eq:Fe} and \eqref{eq:FY} upon linearization and are conservative for energy and lepton number. In Eqs.~\eqref{eq:uprhoe} and \eqref{eq:uprhoYe}, $H$, $\Theta$, $\bar{H}$, and $\bar{\Theta}$ are defined by \begin{eqnarray} H & = & \sum_g H_g, \\ \Theta & = & \sum_g \Theta_g, \\ \bar{H} & = & \sum_g \xi_g H_g, \\ \bar{\Theta} & = & \sum_g \xi_g \Theta_g. \end{eqnarray} In turn, $T$ is updated, and the next outer iteration is initiated. Zhang et~al.\ deploy the synthetic acceleration scheme of \citet{MoLaMa85,MoYaWa07}, which they extended to neutrino transport, to accelerate convergence of their outer iteration. Note that the system given by Eqs.~\eqref{eq:dEg-i1}--\eqref{eq:dEg-i3} does not include energy coupling interactions (e.g., inelastic scattering). Inclusion of these interactions in a fully implicit solve requires modifications to the solution procedure. The degree to which the approach outlined here conserves lepton number and energy was not documented. \subsection{Structure-preserving methods} \label{sec:structurePreservingMethods} Structure-preserving methods are advanced numerical methods that aim to capture key properties of the underlying, continuous PDEs, and include methods that preserve physical bounds on solutions (e.g., positive distribution functions), achieve asymptotic limits of a multi-scale model (e.g., the diffusion limit in radiation transport and steady states), preserve constraints (e.g., the divergence-free condition in magnetohydrodynamics), or conserve secondary quantities (e.g., simultaneous conservation of neutrino number and energy). As such, structure-preserving methods are more faithful to the physics, and often improve accuracy and robustness.
The energy conserving discretization of the spherically symmetric Boltzmann equation by \citet{LiMeMe04} discussed in Sect.~\ref{sec:relativisticEffectsAndConservationOfEnergy}, and the number conserving discretization of the energy equation in the Lagrangian two-moment model by \citet{MuJaDi10} discussed in Sect.~\ref{sec:numericalTwoMomentKinetics}, are examples of structure-preserving discretizations already in use in simulations. These aim to preserve secondary quantities that are not evolved directly by the numerical method. Below we discuss discretizations that aim to preserve physical bounds on evolved quantities. \subsubsection{Preamble: discontinuous Galerkin methods} \label{sec:dgPreamble} Since the following subsections employ the discontinuous Galerkin (DG) method, which has yet to be adapted to CCSN modeling, we include a short description of key elements here by considering the scalar conservation law, \begin{equation} \pd{u}{t}+\pd{f(u)}{x} = 0, \label{eq:dg_scalarConservationLaw} \end{equation} with a linear flux $f(u)=a\,u$, where $a$ is a constant in space and time. We refer to \citet{CoSh89,CoLiSh89,CoHoSh90,CoSh91,CoSh98} for pioneering, in-depth expositions on the early development of DG methods. See also \citet{CoSh01,Shu16} for reviews. To solve Eq.~\eqref{eq:dg_scalarConservationLaw}, the computational domain $D$ is divided into a triangulation $\mathcal{T}$ of non-overlapping elements $K=(x_{\textnormal{\tiny\textsc{L}}},x_{\textnormal{\tiny\textsc{H}}})$, so that $D = \cup_{K \in \mathcal{T}}\,K$. On each element, the solution will then be approximated by functions in the approximation space \begin{equation} \mathbb{V}_{h}^{k}=\{\varphi_{h} : \varphi_{h}\big|_{K} \in \mathbb{P}^{k}(K), \, \, \forall\ K\in \mathcal{T} \}, \label{eq:dg_approximationSpace} \end{equation} where $\mathbb{P}^{k}(K)$ denotes the space of one-dimensional polynomials of maximal degree $k$ (e.g., Legendre polynomials). Functions in $\mathbb{V}_{h}^k$ can be discontinuous across element interfaces (hence discontinuous Galerkin). One then writes the approximate solution to Eq.~\eqref{eq:dg_scalarConservationLaw} on element $K$ as the expansion \begin{equation} u_{h}^{K}(x,t) = \sum_{i=1}^{k+1}u_{i}^{K}(t)\,b_{i}^{K}(x), \label{eq:dg_approximation} \end{equation} where the expansion coefficients $u_{i}^{K}$ are the unknowns for which we solve, and $b_{i}^{K}\in\mathbb{V}_{h}^{k}$ are the basis functions. Next, one defines in what sense $u_{h}^{K}$ will approximate $u$, the solution to Eq.~\eqref{eq:dg_scalarConservationLaw}. To this end, the residual \begin{equation} R(u_{h}^{K}) = \pd{u_{h}^{K}}{t}+\pd{f(u_{h}^{K})}{x} \label{eq:dg_residual} \end{equation} is defined, which is required to be orthogonal to all test functions $\varphi_{h}\in\mathbb{V}_{h}^{k}$; i.e., \begin{equation} \int_{K}R(u_{h}^{K})\,\varphi_{h}^{K}\,dx = 0, \quad\forall\varphi_{h}^{K}\in\mathbb{V}_{h}^{k}.
\label{eq:dg_ansatz} \end{equation} Inserting Eq.~\eqref{eq:dg_residual} into Eq.~\eqref{eq:dg_ansatz}, and performing an integration by parts on the flux term gives \begin{equation} \int_{K}(\pd{u_{h}^{K}}{t})\,\varphi_{h}^{K}\,dx + \big[\,f(u_{h}^{K})(x_{\textnormal{\tiny\textsc{H}}}^{-})\,\varphi_{h}^{K}(x_{\textnormal{\tiny\textsc{H}}}^{-})-f(u_{h}^{K})(x_{\textnormal{\tiny\textsc{L}}}^{+})\,\varphi_{h}^{K}(x_{\textnormal{\tiny\textsc{L}}}^{+})\,\big] - \int_{K}f(u_{h}^{K})\,\pd{\varphi_{h}^{K}}{x}\,dx = 0, \label{eq:dg_ansatz_weak} \end{equation} where $x_{\textnormal{\tiny\textsc{L}}/\textnormal{\tiny\textsc{H}}}^{\pm}=\lim_{\delta\to0^{+}}\big(x_{\textnormal{\tiny\textsc{L}}/\textnormal{\tiny\textsc{H}}}\pm\delta\big)$. However, the entirely local formulation in Eq.~\eqref{eq:dg_ansatz_weak} is problematic because it does not specify how solutions in adjacent elements interact. In addition, a unique flux must be defined on the element interfaces at $x_{\textnormal{\tiny\textsc{L}}/\textnormal{\tiny\textsc{H}}}$ to recover the conservation statement inherent in Eq.~\eqref{eq:dg_scalarConservationLaw}. To resolve this, the fluxes on the element interfaces are replaced by a unique value, which then gives the semi-discrete DG method in weak form: \textit{Find $u_{h}^{K} \in \mathbb{V}_{h}^{k}$ such that} \begin{equation} \int_{K}(\pd{u_{h}^{K}}{t})\,\varphi_{h}^{K}\,dx + \big[\,\widehat{f(u_{h}^{K})}(x_{\textnormal{\tiny\textsc{H}}})\,\varphi_{h}^{K}(x_{\textnormal{\tiny\textsc{H}}}^{-})-\widehat{f(u_{h}^{K})}(x_{\textnormal{\tiny\textsc{L}}})\,\varphi_{h}^{K}(x_{\textnormal{\tiny\textsc{L}}}^{+})\,\big] - \int_{K}f(u_{h}^{K})\,\pd{\varphi_{h}^{K}}{x}\,dx = 0 \label{eq:dg_semiDiscrete_weak} \end{equation} holds for all $\varphi_{h}\in\mathbb{V}_{h}^{k}$ and all $K\in\mathcal{T}$. In Eq.~\eqref{eq:dg_semiDiscrete_weak}, $\widehat{f(u_{h}^{K})}(x)$ is a unique numerical flux defined on the interface. For the scalar problem considered here, the familiar upwind flux can be used: \begin{equation} \widehat{f(u_{h}^{K})}(x) =\f{1}{2}\,\big(\,f(u_{h}^{K}(x^{-}))+f(u_{h}^{K}(x^{+}))-|a|\,(u_{h}^{K}(x^{+})-u_{h}^{K}(x^{-}))\,\big), \label{eq:dg_upwindFlux} \end{equation} which is defined in terms of the approximations immediately to the left and right of $x$; these can differ. Undoing the integration by parts that resulted in Eq.~\eqref{eq:dg_semiDiscrete_weak} gives the semi-discrete DG method in strong form: \textit{Find $u_{h}^{K} \in \mathbb{V}_{h}^{k}$ such that} \begin{align} &\int_{K}R(u_{h}^{K})\,\varphi_{h}^{K}\,dx \label{eq:dg_semiDiscrete_strong} \\ &= \big[\,\big(f(u_{h}^{K}(x_{\textnormal{\tiny\textsc{H}}}^{-}))-\widehat{f(u_{h}^{K})}(x_{\textnormal{\tiny\textsc{H}}})\big)\,\varphi_{h}^{K}(x_{\textnormal{\tiny\textsc{H}}}^{-})-\big(f(u_{h}^{K}(x_{\textnormal{\tiny\textsc{L}}}^{+}))-\widehat{f(u_{h}^{K})}(x_{\textnormal{\tiny\textsc{L}}})\big)\,\varphi_{h}^{K}(x_{\textnormal{\tiny\textsc{L}}}^{+})\,\big], \nonumber \end{align} for all $\varphi_{h}^{K}\in\mathbb{V}_{h}^{k}$ and all $K\in\mathcal{T}$. Here, the weak and the strong formulations (Eqs.~\eqref{eq:dg_semiDiscrete_weak} and \eqref{eq:dg_semiDiscrete_strong}, respectively) are equivalent statements. By comparing the strong formulation with Eq.~\eqref{eq:dg_ansatz}, one sees that the residual in the DG solution is orthogonal to $\varphi_{h}$ only in the convergent limit when $f(u_{h}^{K}(x^{\pm}))\to\widehat{f(u_{h}^{K})}(x)$.
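As a concrete illustration (not taken from the references above), the following minimal sketch (in Python) advances the weak formulation, Eq.~\eqref{eq:dg_semiDiscrete_weak}, for linear advection with $a>0$ on a uniform periodic grid, using a modal Legendre basis with $k=1$ and the upwind flux of Eq.~\eqref{eq:dg_upwindFlux}; the mass and stiffness matrices appearing here are precisely those derived in the passage that follows, and the grid parameters are illustrative.
\begin{verbatim}
import numpy as np

# Modal Legendre basis b1 = 1, b2 = xi on each element (k = 1);
# linear advection u_t + a u_x = 0, a > 0, periodic domain [0, 1).
N, a = 64, 1.0
dx = 1.0 / N
Minv = np.diag([1.0 / dx, 3.0 / dx])    # inverse mass matrix
S = np.array([[0.0, 0.0],               # stiffness matrix,
              [2.0 * a, 0.0]])          # S_ij = a int (db_i/dx) b_j dx

def rhs(u):
    # u[K, i] is expansion coefficient i on element K.
    uR = u[:, 0] + u[:, 1]              # trace u_h(x_H^-)
    fR = a * uR                         # upwind flux at right interfaces
    fL = np.roll(fR, 1)                 # upwind flux at left interfaces
    # surface term with b(x_H^-) = (1, 1) and b(x_L^+) = (1, -1)
    surf = np.stack([fR - fL, fR + fL], axis=1)
    return -(surf - u @ S.T) @ Minv

x = (np.arange(N) + 0.5) * dx
u = np.zeros((N, 2))
u[:, 0] = np.sin(2.0 * np.pi * x)       # smooth initial data
dt = 0.2 * dx                           # CFL below the k = 1 limit
for _ in range(int(round(1.0 / dt))):   # one advection period
    u1 = u + dt * rhs(u)                # forward-Euler stage
    u = 0.5 * (u + u1 + dt * rhs(u1))   # SSP-RK2 convex combination
\end{verbatim}
After one period, \texttt{u[:, 0]} approximates the element means of the initial data, up to discretization error.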
In Sections \ref{sec:boundPreserving} and \ref{sec:realizabilityPreserving}, we will only refer to the weak formulation in Eq.~\eqref{eq:dg_semiDiscrete_weak}. To further illustrate how the weak formulation in Eq.~\eqref{eq:dg_semiDiscrete_weak} is used in practice, let \begin{equation} \mathbf{u}^{K}(t) =\big(\,u_{1}^{K}(t),\ldots,u_{k+1}^{K}(t)\,\big)^{T} \quad\text{and}\quad \mathbf{b}^{K}(x) =\big(\,b_{1}^{K}(x),\ldots,b_{k+1}^{K}(x)\,\big)^{T}. \end{equation} Then, by inserting Eq.~\eqref{eq:dg_approximation} into Eq.~\eqref{eq:dg_semiDiscrete_weak}, and letting $\varphi_{h}=b_{j}~(j=1,\ldots,k+1)$, one obtains an equation for the expansion coefficients: \begin{equation} \deriv{\mathbf{u}^{K}}{t} =-(M^{K})^{-1}\,\Big\{\,\big[\,\widehat{f(u_{h}^{K})}(x_{\textnormal{\tiny\textsc{H}}})\,\mathbf{b}^{K}(x_{\textnormal{\tiny\textsc{H}}}^{-})-\widehat{f(u_{h}^{K})}(x_{\textnormal{\tiny\textsc{L}}})\,\mathbf{b}^{K}(x_{\textnormal{\tiny\textsc{L}}}^{+})\,\big] - S^{K}\,\mathbf{u}^{K}\,\Big\}, \label{eq:dg_weak_ode} \end{equation} where components of the \emph{mass matrix} and \emph{stiffness matrix} are defined as \begin{equation} M_{ij}^{K} = \int_{K}b_{i}^{K}\,b_{j}^{K}\,dx \quad\text{and}\quad S_{ij}^{K} = a\,\int_{K}(\pd{b_{i}^{K}}{x})\,b_{j}^{K}\,dx, \label{eq:dg_matrices} \end{equation} respectively. Since the basis functions are polynomials, the integrals in Eq.~\eqref{eq:dg_matrices} can be computed exactly with, e.g., Gaussian quadratures. Eq.~\eqref{eq:dg_weak_ode} is now a system of ODEs, which can be integrated in time with an ODE solver. For non-stiff problems, explicit Runge--Kutta methods can be used. The DG method has been used to develop structure-preserving methods in a range of applications; see for example \citet{ZhSh10b} and \citet{WuTa16} for physical-constraint-preserving methods for the non-relativistic and relativistic Euler equations, respectively, \citet{LiXi18} for a steady-state preserving method for the Euler equations with gravitation, and \citet{JuHaTe18} for an energy-conserving DG method for kinetic plasma simulations. We also mention the work of \citet{HeHa20}, where DG and finite-volume methods are combined into a hybrid transport scheme that captures the diffusion limit and is more efficient in terms of memory usage and computational time than the corresponding DG-only scheme. \subsubsection{Bound-preserving methods} \label{sec:boundPreserving} \citet{ZhSh10a} developed a general framework for ``maximum-principle-preserving'', high-order methods for scalar conservation laws \citep[see also][]{ZhSh11}. Inspired by this work, \citet{EnHaXi15} developed bound-preserving methods in the context of neutrino transport, aiming to maintain a distribution function satisfying $f\in[0,1]$, as dictated by Pauli's exclusion principle. They considered the (collisionless) phase-space advection problem in curvilinear coordinates, and included a general relativistic example in spherical symmetry with a time-independent spacetime metric given by \begin{equation} ds^{2} = -\alpha^{2}\,dt^{2} + \gamma_{ij}\,dx^{i}\,dx^{j}, \quad\text{with}\quad\gamma_{ij}=\psi^{4}\mbox{diag}\big[\,1,r^{2},r^{2}\sin^{2}\theta\,\big], \end{equation} where $\alpha$ is the lapse function and $\psi$ the conformal factor.
Under these assumptions, the Boltzmann equation takes the form \begin{align} & \f{1}{\alpha}\pderiv{f}{t} +\f{1}{\alpha\,\psi^{6}\,r^{2}}\pderiv{}{r}\Big(\,\alpha\,\psi^{4}\,r^{2}\,\mu\,f\,\Big) -\f{1}{\varepsilon^{2}}\pderiv{}{\varepsilon} \Big(\,\varepsilon^{3}\,\f{1}{\psi^{2}\,\alpha}\pderiv{\alpha}{r}\,\mu\,f\,\Big) \nonumber \\ & \hspace{12pt} +\pderiv{}{\mu} \Big(\,\big(1-\mu^{2}\big)\,\psi^{-2}\, \Big\{\, \f{1}{r} +\f{1}{\psi^{2}}\pderiv{\psi^{2}}{r} -\f{1}{\alpha}\pderiv{\alpha}{r} \,\Big\}\,f \,\Big) =0, \label{eq:ConservativeBoltzmannEquationSphericalSymmetryGR} \end{align} where $r\ge0$ is the radius, $\mu\in[-1,1]$ is the momentum-space angle cosine, and $\varepsilon\ge0$ is the neutrino energy. By defining phase-space coordinates $z^{1}=r$, $z^{2}=\mu$, and $z^{3}=\varepsilon$, the phase space volume Jacobian $\tau=\psi^{6}\,r^{2}\,\varepsilon^{2}$, and \begin{equation} H^{1} = H^{(r)}= \f{\alpha}{\psi^{2}}\mu,\quad H^{2} = H^{(\mu)} = \f{\alpha\big(1-\mu^{2}\big)}{\psi^{2}r}\,\Psi,\quad\text{and}\quad H^{3} = H^{(\varepsilon)} = - \f{\varepsilon}{\psi^{2}}\pderiv{\alpha}{r}\mu, \label{eq:phaseSpaceFluxCoefficients} \end{equation} where \begin{equation} \Psi = 1+r\,\pd{\ln\psi^{2}}{r}-r\,\pd{\ln\alpha}{r}, \end{equation} Eq.~\eqref{eq:ConservativeBoltzmannEquationSphericalSymmetryGR} can be written in the compact form \begin{equation} \pderiv{f}{t}+\f{1}{\tau}\sum_{i=1}^{3}\pderiv{}{z^{i}}\Big(\,\tau\,H^{i}f\,\Big) = 0. \label{eq:ConservativeBoltzmannCompact} \end{equation} It is straightforward to show that \begin{equation} \f{1}{\tau}\sum_{i=1}^{3}\pderiv{}{z^{i}}\Big(\,\tau\,H^{i}\,\Big) = 0 \label{eq:DivergenceFreeCondition} \end{equation} holds. The divergence-free condition on the phase-space flow in Eq.~\eqref{eq:DivergenceFreeCondition} plays an important role in maintaining $f\le1$. \citet{EnHaXi15} employed the discontinuous Galerkin (DG) method \citep[see, e.g.,][and references therein]{CoSh01,Shu16} to solve Eq.~\eqref{eq:ConservativeBoltzmannEquationSphericalSymmetryGR}. To this end, the phase space domain $D$ is divided into a triangulation $\mathcal{T}$ of elements $\mathbf{K}$, so that $D = \cup_{\mathbf{K} \in \mathcal{T}}\,\mathbf{K}$. Each element is a logically Cartesian box \begin{equation} \mathbf{K}=\{(r,\mu,\varepsilon)\in\mathbb{R}^{3} : r\in K^{(r)}:=(r_{\textnormal{\tiny\textsc{L}}},r_{\textnormal{\tiny\textsc{H}}}),\, \mu\in K^{(\mu)}:=(\mu_{\textnormal{\tiny\textsc{L}}},\mu_{\textnormal{\tiny\textsc{H}}}),\, \varepsilon\in K^{(\varepsilon)}:=(\varepsilon_{\textnormal{\tiny\textsc{L}}},\varepsilon_{\textnormal{\tiny\textsc{H}}})\}, \end{equation} where $z_{\textnormal{\tiny\textsc{L}}}^{i}$ and $z_{\textnormal{\tiny\textsc{H}}}^{i}$ are, respectively, the coordinates of the lower and higher boundaries of $\mathbf{K}$ in the $i$th dimension. On each element, the approximation space for the DG method, $\mathbb{V}_{h}^k$, is \begin{equation}\label{ldg:vhk} \mathbb{V}_{h}^{k}=\{\varphi_{h} : \varphi_{h}\big|_{\mathbf{K}} \in \mathbb{Q}^{k}(\mathbf{K}), \, \, \forall\ \mathbf{K}\in \mathcal{T} \}, \end{equation} where $\mathbb{Q}^{k}$ is the space of tensor products of one-dimensional polynomials of maximal degree $k$. The approximation to the distribution function, $f_{h}$, is then expressed as \begin{equation} f_{h}(\mathbf{z},t)=\sum_{i=1}^{(k+1)^{3}}C_{i}(t)\,P_{i}(\mathbf{z}), \end{equation} where each $P_{i}\in\mathbb{V}_{h}^{k}$. Note that functions in $\mathbb{V}_{h}^k$ can be discontinuous across element interfaces.
Then the DG method is as follows: \textit{Find $f_{h} \in \mathbb{V}_{h}^{k}$ such that, for all $\varphi_{h} \in \mathbb{V}_{h}^{k}$ and all $\mathbf{K} \in \mathcal{T}$,} \begin{align} & \int_{\mathbf{K}}\pd{}{t}f_{h}\,\varphi_{h}\,dV -\int_{\mathbf{K}}H^{(r)}f_{h}\pd{\varphi_{h}}{r}\,dV -\int_{\mathbf{K}}H^{(\mu)}f_{h}\pd{\varphi_{h}}{\mu}\,dV -\int_{\mathbf{K}}H^{(\varepsilon)}f_{h}\,\pd{\varphi_{h}}{\varepsilon}\,dV \nonumber \\ & \hspace{12pt} + \int_{\tilde{K}^{(r)}}\widehat{H^{(r)}f_{h}}(r_{\textnormal{\tiny\textsc{H}}},\mu,\varepsilon)\,\varphi_{h}(r_{\textnormal{\tiny\textsc{H}}}^{-},\mu,\varepsilon)\,\tau(r_{\textnormal{\tiny\textsc{H}}},\varepsilon)\,d\tilde{V}^{(r)} \nonumber \\ & \hspace{48pt} - \int_{\tilde{K}^{(r)}}\widehat{H^{(r)}f_{h}}(r_{\textnormal{\tiny\textsc{L}}},\mu,\varepsilon)\,\varphi_{h}(r_{\textnormal{\tiny\textsc{L}}}^{+},\mu,\varepsilon)\,\tau(r_{\textnormal{\tiny\textsc{L}}},\varepsilon)\,d\tilde{V}^{(r)} \nonumber \\ & \hspace{12pt} + \int_{\tilde{K}^{(\mu)}}\widehat{H^{(\mu)}f_{h}}(r,\mu_{\textnormal{\tiny\textsc{H}}},\varepsilon)\,\varphi_{h}(r,\mu_{\textnormal{\tiny\textsc{H}}}^{-},\varepsilon)\,\tau(r,\varepsilon)\,d\tilde{V}^{(\mu)} \nonumber \\ & \hspace{48pt} - \int_{\tilde{K}^{(\mu)}}\widehat{H^{(\mu)}f_{h}}(r,\mu_{\textnormal{\tiny\textsc{L}}},\varepsilon)\,\varphi_{h}(r,\mu_{\textnormal{\tiny\textsc{L}}}^{+},\varepsilon)\,\tau(r,\varepsilon)\,d\tilde{V}^{(\mu)} \nonumber \\ & \hspace{12pt} + \int_{\tilde{K}^{(\varepsilon)}}\widehat{H^{(\varepsilon)}f_{h}}(r,\mu,\varepsilon_{\textnormal{\tiny\textsc{H}}})\,\varphi_{h}(r,\mu,\varepsilon_{\textnormal{\tiny\textsc{H}}}^{-})\,\tau(r,\varepsilon_{\textnormal{\tiny\textsc{H}}})\,d\tilde{V}^{(\varepsilon)} \nonumber \\ & \hspace{48pt} - \int_{\tilde{K}^{(\varepsilon)}}\widehat{H^{(\varepsilon)}f_{h}}(r,\mu,\varepsilon_{\textnormal{\tiny\textsc{L}}})\,\varphi_{h}(r,\mu,\varepsilon_{\textnormal{\tiny\textsc{L}}}^{+})\,\tau(r,\varepsilon_{\textnormal{\tiny\textsc{L}}})\,d\tilde{V}^{(\varepsilon)}=0, \label{eq:ConservativeBoltzmannSphericalSymmetryGRDG} \end{align} where the infinitesimal phase-space volume and ``area'' elements are \begin{equation} dV=\tau\,dr\,d\mu\,d\varepsilon,\quad d\tilde{V}^{(r)}=d\mu\,d\varepsilon,\quad d\tilde{V}^{(\mu)}=dr\,d\varepsilon,\quad d\tilde{V}^{(\varepsilon)}=dr\,d\mu, \end{equation} and the subelements are \begin{equation} \tilde{K}^{(r)}=K^{(\mu)}\times K^{(\varepsilon)},\quad \tilde{K}^{(\mu)}=K^{(r)}\times K^{(\varepsilon)},\quad \tilde{K}^{(\varepsilon)}=K^{(r)}\times K^{(\mu)}.
\end{equation} In Eq.~\eqref{eq:ConservativeBoltzmannSphericalSymmetryGRDG}, upwind fluxes are used for the numerical fluxes on element interfaces: \begin{align} \widehat{H^{(r)}f_{h}}(r_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}},\mu,\varepsilon) &=\mathcal{H}^{(r)}\big(f_{h}(r_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{-},\mu,\varepsilon),f_{h}(r_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{+},\mu,\varepsilon); r_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}},\mu,\varepsilon\big) \label{eq:numericalFluxFunction_R} \\ &=\f{\alpha_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}}{\psi_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{2}} \Big\{\, \f{1}{2}\big(\mu+|\mu|\big)\,f_{h}(r_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{-},\mu,\varepsilon)+\f{1}{2}\big(\mu-|\mu|\big)\,f_{h}(r_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{+},\mu,\varepsilon) \,\Big\}, \nonumber \\ \widehat{H^{(\mu)}f_{h}}(r,\mu_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}},\varepsilon) &=\mathcal{H}^{(\mu)}\big(f_{h}(r,\mu_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{-},\varepsilon),f_{h}(r,\mu_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{+},\varepsilon); r,\mu_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}},\varepsilon\big) \label{eq:numericalFluxFunction_Mu} \\ &=\f{\alpha}{\psi^{2}\,r}\,(1-\mu_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{2})\, \Big\{\, \f{1}{2}\big(\Psi+|\Psi|\big)\,f_{h}(r,\mu_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{-},\varepsilon) \nonumber \\ &\hspace{108pt} +\f{1}{2}\big(\Psi-|\Psi|\big)\,f_{h}(r,\mu_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{+},\varepsilon) \,\Big\}, \nonumber \\ \widehat{H^{(\varepsilon)}f_{h}}(r,\mu,\varepsilon_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}) &=\mathcal{H}^{(\varepsilon)}\big(f_{h}(r,\mu,\varepsilon_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{-}),f_{h}(r,\mu,\varepsilon_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{+}); r,\mu,\varepsilon_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}\big) \label{eq:numericalFluxFunction_E} \\ &=-\f{\varepsilon_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}}{\psi^{2}} \Big\{\, \f{1}{2}\big(\pd{}{r}\alpha\,\mu-|\pd{}{r}\alpha\,\mu|\big)\,f_{h}(r,\mu,\varepsilon_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{-}) \nonumber \\ &\hspace{66pt} +\f{1}{2}\big(\pd{}{r}\alpha\,\mu+|\pd{}{r}\alpha\,\mu|\big)\,f_{h}(r,\mu,\varepsilon_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{+}) \,\Big\}. \nonumber \end{align} Key to the design of a time-explicit, bound-preserving method for Eq.~\eqref{eq:ConservativeBoltzmannEquationSphericalSymmetryGR} is to find conditions such that, after the update from $f_{h}^{n}$ to $f_{h}^{n+1}$ with time step $\Delta t=t^{n+1}-t^{n}$, the cell-averaged distribution function, defined as \begin{equation} f_{\mathbf{K}}=\f{1}{V_{\mathbf{K}}}\int_{\mathbf{K}}f_{h}\,dV, \quad\text{where}\quad V_{\mathbf{K}} =\int_{\mathbf{K}}dV, \label{eq:boundPreservingCellAverage} \end{equation} satisfies the bounds; i.e., $f_{\mathbf{K}}^{n+1}\in[0,1]$. 
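We note in passing that the divergence-free identity in Eq.~\eqref{eq:DivergenceFreeCondition}, which enters the proof of the upper bound $f_{\mathbf{K}}^{n+1}\le1$ below, can be verified symbolically for arbitrary profiles $\alpha(r)$ and $\psi(r)$. A minimal sketch using the \texttt{sympy} computer algebra system:
\begin{verbatim}
import sympy as sp

r, eps = sp.symbols('r epsilon', positive=True)
mu = sp.symbols('mu', real=True)
alpha = sp.Function('alpha')(r)
psi = sp.Function('psi')(r)

tau = psi**6 * r**2 * eps**2          # phase-space volume Jacobian
Psi = 1 + r*sp.diff(sp.log(psi**2), r) - r*sp.diff(sp.log(alpha), r)
H_r = alpha / psi**2 * mu             # cf. Eq. (phaseSpaceFluxCoefficients)
H_mu = alpha * (1 - mu**2) / (psi**2 * r) * Psi
H_eps = -eps / psi**2 * sp.diff(alpha, r) * mu

div = (sp.diff(tau * H_r, r) + sp.diff(tau * H_mu, mu)
       + sp.diff(tau * H_eps, eps)) / tau
assert sp.simplify(div) == 0          # Eq. (DivergenceFreeCondition)
\end{verbatim}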
The standard approach is to find sufficient conditions such that these bounds hold with the first-order forward Euler method, while the extension to higher-order accuracy in time relies on the use of a strong stability-preserving (SSP) time stepping method, which can be expressed as convex combinations of forward Euler operators \citep{GoShTa01}. The conditions that are sought include a time step restriction. Then, if the bounds on the cell average at $t^{n+1}$ hold with the forward Euler method provided $\Delta t\le\Delta t_{\mathrm{FE}}$ (where $\Delta t_{\mathrm{FE}}$ is to be determined), the bounds will also hold when an SSP method is used, provided $\Delta t\le C_{\mathrm{SSP}}\times\Delta t_{\mathrm{FE}}$, where $0<C_{\mathrm{SSP}}\le1$. For the optimal second- and third-order SSP Runge--Kutta (SSP-RK) methods from \citet{ShOs88}, $C_{\mathrm{SSP}}=1$. The equation for the cell-average is obtained from Eq.~\eqref{eq:ConservativeBoltzmannSphericalSymmetryGRDG} with $\varphi_{h}=1$ (the lowest possible degree polynomial in the approximation space $\mathbb{V}_{h}^{k}$). With forward Euler time stepping, we then have \begin{align} f_{\mathbf{K}}^{n+1} &=f_{\mathbf{K}}^{n} -\f{\Delta t}{V_{\mathbf{K}}} \Big\{\, \psi^{6}(r_{\textnormal{\tiny\textsc{H}}})\,r_{\textnormal{\tiny\textsc{H}}}^{2}\int_{\tilde{K}^{(r)}}\widehat{H^{(r)}f_{h}^{n}}(r_{\textnormal{\tiny\textsc{H}}},\mu,\varepsilon)\,\varepsilon^{2}\,d\tilde{V}^{(r)} \nonumber \\ &\hspace{96pt} -\psi^{6}(r_{\textnormal{\tiny\textsc{L}}})\,r_{\textnormal{\tiny\textsc{L}}}^{2}\int_{\tilde{K}^{(r)}}\widehat{H^{(r)}f_{h}^{n}}(r_{\textnormal{\tiny\textsc{L}}},\mu,\varepsilon)\,\varepsilon^{2}\,d\tilde{V}^{(r)} \nonumber \\ &\hspace{72pt} +\int_{\tilde{K}^{(\mu)}}\widehat{H^{(\mu)}f_{h}^{n}}(r,\mu_{\textnormal{\tiny\textsc{H}}},\varepsilon)\,\psi^{6}(r)\,r^{2}\,\varepsilon^{2}\,d\tilde{V}^{(\mu)} \nonumber \\ &\hspace{96pt} -\int_{\tilde{K}^{(\mu)}}\widehat{H^{(\mu)}f_{h}^{n}}(r,\mu_{\textnormal{\tiny\textsc{L}}},\varepsilon)\,\psi^{6}(r)\,r^{2}\,\varepsilon^{2}\,d\tilde{V}^{(\mu)} \nonumber \\ &\hspace{72pt} +\varepsilon_{\textnormal{\tiny\textsc{H}}}^{2}\int_{\tilde{K}^{(\varepsilon)}}\widehat{H^{(\varepsilon)}f_{h}^{n}}(r,\mu,\varepsilon_{\textnormal{\tiny\textsc{H}}})\,\psi^{6}(r)\,r^{2}\,d\tilde{V}^{(\varepsilon)} \nonumber \\ &\hspace{96pt} -\varepsilon_{\textnormal{\tiny\textsc{L}}}^{2}\int_{\tilde{K}^{(\varepsilon)}}\widehat{H^{(\varepsilon)}f_{h}^{n}}(r,\mu,\varepsilon_{\textnormal{\tiny\textsc{L}}})\,\psi^{6}(r)\,r^{2}\,d\tilde{V}^{(\varepsilon)} \,\Big\}. \label{eq:averageUpdateSphericalSymmetryGR} \end{align} Assuming that $f_{\mathbf{K}}^{n}\in[0,1]$, the flux terms (which can be positive or negative) can bring $f_{\mathbf{K}}^{n+1}$ outside the bounds. The contributions from these terms vanish as $\Delta t\to0$, and this is where restrictions on the time step come in. To find these restrictions, $f_{\mathbf{K}}^{n}$ is split into three parts and combined with the flux terms arising from the three phase-space dimensions in the current setting.
To this end, we define positive constants $s_{1},s_{2},s_{3}>0$, satisfying $s_{1}+s_{2}+s_{3}=1$, and write the cell-average as \begin{align} f_{\mathbf{K}}^{n} &= \f{s_{1}}{V_{\mathbf{K}}}\int_{\tilde{K}^{(r)}}\int_{K^{(r)}}f_{h}^{n}\,\tau\,dr\,d\tilde{V}^{(r)} + \f{s_{2}}{V_{\mathbf{K}}}\int_{\tilde{K}^{(\mu)}}\int_{K^{(\mu)}}f_{h}^{n}\,\tau\,d\mu\,d\tilde{V}^{(\mu)} \nonumber \\ &\hspace{12pt} + \f{s_{3}}{V_{\mathbf{K}}}\int_{\tilde{K}^{(\varepsilon)}}\int_{K^{(\varepsilon)}}f_{h}^{n}\,\tau\,d\varepsilon\,d\tilde{V}^{(\varepsilon)}. \end{align} Inserting this into Eq.~\eqref{eq:averageUpdateSphericalSymmetryGR} gives \begin{equation} f_{\mathbf{K}}^{n+1} = \f{s_{1}}{V_{\mathbf{K}}}\int_{\tilde{K}^{(r)}}\Gamma^{(r)}[f_{h}^{n}]d\tilde{V}^{(r)} + \f{s_{2}}{V_{\mathbf{K}}}\int_{\tilde{K}^{(\mu)}}\Gamma^{(\mu)}[f_{h}^{n}]d\tilde{V}^{(\mu)} + \f{s_{3}}{V_{\mathbf{K}}}\int_{\tilde{K}^{(\varepsilon)}}\Gamma^{(\varepsilon)}[f_{h}^{n}]d\tilde{V}^{(\varepsilon)}, \label{eq:averageUpdateInTermsOfGammaSphericalSymmetryGR} \end{equation} where \begin{align} &\Gamma^{(r)}[f_{h}^{n}](\mu,\varepsilon) \label{eq:Gamma1} \\ &=\int_{K^{(r)}}f_{h}^{n}\,\tau\,dr - \f{\Delta t\,\varepsilon^{2}}{s_{1}}\Big\{\,\psi^{6}(r_{\textnormal{\tiny\textsc{H}}})\,r_{\textnormal{\tiny\textsc{H}}}^{2}\,\widehat{H^{(r)}f_{h}^{n}}(r_{\textnormal{\tiny\textsc{H}}},\mu,\varepsilon)-\psi^{6}(r_{\textnormal{\tiny\textsc{L}}})\,r_{\textnormal{\tiny\textsc{L}}}^{2}\,\widehat{H^{(r)}f_{h}^{n}}(r_{\textnormal{\tiny\textsc{L}}},\mu,\varepsilon)\,\Big\}, \nonumber \\ &\Gamma^{(\mu)}[f_{h}^{n}](r,\varepsilon) \label{eq:Gamma2} \\ &=\int_{K^{(\mu)}}f_{h}^{n}\,\tau\,d\mu -\f{\Delta t\tau}{s_{2}}\Big\{\,\widehat{H^{(\mu)}f_{h}^{n}}(r,\mu_{\textnormal{\tiny\textsc{H}}},\varepsilon) - \widehat{H^{(\mu)}f_{h}^{n}}(r,\mu_{\textnormal{\tiny\textsc{L}}},\varepsilon) \,\Big\}, \nonumber \\ &\Gamma^{(\varepsilon)}[f_{h}^{n}](r,\mu) \label{eq:Gamma3} \\ &=\int_{K^{(\varepsilon)}}f_{h}^{n}\,\tau\,d\varepsilon -\f{\Delta t\psi^{6}(r)r^{2}}{s_{3}}\Big\{\,\varepsilon_{\textnormal{\tiny\textsc{H}}}^{2}\,\widehat{H^{(\varepsilon)}f_{h}}(r,\mu,\varepsilon_{\textnormal{\tiny\textsc{H}}})-\varepsilon_{\textnormal{\tiny\textsc{L}}}^{2}\,\widehat{H^{(\varepsilon)}f_{h}}(r,\mu,\varepsilon_{\textnormal{\tiny\textsc{L}}})\,\Big\}. \nonumber \end{align} With the cell-average expressed as in Eq.~\eqref{eq:averageUpdateInTermsOfGammaSphericalSymmetryGR}, in order to ensure $f_{\mathbf{K}}^{n+1}\ge0$, it is sufficient to find conditions for which each of the right-hand sides in Eqs.~\eqref{eq:Gamma1}--\eqref{eq:Gamma3} is nonnegative. We illustrate the details of this for Eq.~\eqref{eq:Gamma1} \citep[see][for full details]{EnHaXi15}. In the DG method, the integrals over the faces $\tilde{K}^{(r)}$, $\tilde{K}^{(\mu)}$, and $\tilde{K}^{(\varepsilon)}$ in Eq.~\eqref{eq:averageUpdateInTermsOfGammaSphericalSymmetryGR} are typically evaluated with a quadrature rule. In this case, it is sufficient that $\Gamma^{(r)},\Gamma^{(\mu)},\Gamma^{(\varepsilon)}\ge0$ hold at the respective quadrature points. As an example, we let $\tilde{\mathbf{S}}^{(r)}\,(\subset\tilde{K}^{(r)})$ denote the set of quadrature points used to integrate over $\tilde{K}^{(r)}$ in Eq.~\eqref{eq:averageUpdateInTermsOfGammaSphericalSymmetryGR}.
To evaluate the integral on the right-hand side of Eq.~\eqref{eq:Gamma1}, an $N^{(r)}$-point Gauss-Lobatto quadrature rule is used on the interval $K^{(r)}$, with points \begin{align} \hat{S}^{(r)} = \big\{\,r_{\textnormal{\tiny\textsc{L}}}=\hat{r}_{1},\ldots,\hat{r}_{N^{(r)}}=r_{\textnormal{\tiny\textsc{H}}}\,\big\}, \end{align} and weights $\hat{w}_{q}\in(0,1]$, normalized such that $\sum_{q=1}^{N^{(r)}}\hat{w}_{q}=1$. This quadrature integrates polynomials in $r$ of degree $2N^{(r)}-3$ exactly. We can then write \begin{equation} \int_{K^{(r)}}f_{h}^{n}\,\tau\,dr = \Delta r\sum_{q=1}^{N^{(r)}}\hat{w}_{q}\,f_{h}^{n}(\hat{r}_{q},\mu,\varepsilon)\,\tau(\hat{r}_{q},\mu,\varepsilon). \label{eq:radialLobattoQuadrature} \end{equation} If the distribution function is approximated with a polynomial of degree $k$ in $r$, and $\psi^{6}$ is approximated by a polynomial of degree $k_{\psi}$, the quadrature is exact if $N^{(r)}\ge(k+k_{\psi}+5)/2$. The reason for using the Gauss-Lobatto quadrature for the integral over $K^{(r)}$ is that it includes the end points of the interval ($r_{\textnormal{\tiny\textsc{L}}},r_{\textnormal{\tiny\textsc{H}}}$). These end points are used to balance the flux terms in the radial dimension. Inserting Eq.~\eqref{eq:radialLobattoQuadrature} into Eq.~\eqref{eq:Gamma1} gives \begin{align} \f{1}{\Delta r}\Gamma^{(r)}[f_{h}^{n}] &=\sum_{q=1}^{N^{(r)}}\hat{w}_{q}\,f_{h}^{n}(\hat{r}_{q})\,\tau(\hat{r}_{q}) \nonumber \\ &\hspace{12pt} - \f{\Delta t\,\varepsilon^{2}}{s_{1}}\Big\{\,\psi^{6}(r_{\textnormal{\tiny\textsc{H}}})\,r_{\textnormal{\tiny\textsc{H}}}^{2}\,\mathcal{H}^{(r)}\big(f_{h}(r_{\textnormal{\tiny\textsc{H}}}^{-}),f_{h}(r_{\textnormal{\tiny\textsc{H}}}^{+}); r_{\textnormal{\tiny\textsc{H}}}\big) \nonumber \\ &\hspace{56pt} -\psi^{6}(r_{\textnormal{\tiny\textsc{L}}})\,r_{\textnormal{\tiny\textsc{L}}}^{2}\,\mathcal{H}^{(r)}\big(f_{h}(r_{\textnormal{\tiny\textsc{L}}}^{-}),f_{h}(r_{\textnormal{\tiny\textsc{L}}}^{+}); r_{\textnormal{\tiny\textsc{L}}}\big)\,\Big\} \nonumber \\ &=\sum_{q=2}^{N^{(r)}-1}\hat{w}_{q}\,f_{h}^{n}(\hat{r}_{q})\,\tau(\hat{r}_{q}) + \hat{w}_{1}\,\Phi_{1}^{(r)}\big[f_{h}^{n}(r_{\textnormal{\tiny\textsc{L}}}^{-}),f_{h}^{n}(r_{\textnormal{\tiny\textsc{L}}}^{+})\big]\,\tau(r_{\textnormal{\tiny\textsc{L}}}) \nonumber \\ &\hspace{12pt} +\hat{w}_{N^{(r)}}\,\Phi_{N^{(r)}}^{(r)}\big[f_{h}^{n}(r_{\textnormal{\tiny\textsc{H}}}^{-}),f_{h}^{n}(r_{\textnormal{\tiny\textsc{H}}}^{+})\big]\,\tau(r_{\textnormal{\tiny\textsc{H}}}), \end{align} where \begin{align} \Phi_{1}^{(r)}\big[f_{h}^{n}(r_{\textnormal{\tiny\textsc{L}}}^{-}),f_{h}^{n}(r_{\textnormal{\tiny\textsc{L}}}^{+})\big] &=f_{h}^{n}(r_{\textnormal{\tiny\textsc{L}}}^{+})+\f{\Delta t}{s_{1}\hat{w}_{1}\Delta r}\,\mathcal{H}^{(r)}\big(f_{h}^{n}(r_{\textnormal{\tiny\textsc{L}}}^{-}),f_{h}^{n}(r_{\textnormal{\tiny\textsc{L}}}^{+}); r_{\textnormal{\tiny\textsc{L}}}\big), \\ \Phi_{N^{(r)}}^{(r)}\big[f_{h}^{n}(r_{\textnormal{\tiny\textsc{H}}}^{-}),f_{h}^{n}(r_{\textnormal{\tiny\textsc{H}}}^{+})\big] &=f_{h}^{n}(r_{\textnormal{\tiny\textsc{H}}}^{-})-\f{\Delta t}{s_{1}\hat{w}_{N^{(r)}}\Delta r}\,\mathcal{H}^{(r)}\big(f_{h}^{n}(r_{\textnormal{\tiny\textsc{H}}}^{-}),f_{h}^{n}(r_{\textnormal{\tiny\textsc{H}}}^{+}); r_{\textnormal{\tiny\textsc{H}}}\big). \end{align} (For notational brevity, we have suppressed the $(\mu,\varepsilon)$-dependence.)
Using the numerical flux function in Eq.~\eqref{eq:numericalFluxFunction_R}, one can write \begin{align} &\Phi_{1}^{(r)}\big[f_{h}^{n}(r_{\textnormal{\tiny\textsc{L}}}^{-}),f_{h}^{n}(r_{\textnormal{\tiny\textsc{L}}}^{+})\big] \label{eq:PhiOne_R} \\ &= f_{h}^{n}(r_{\textnormal{\tiny\textsc{L}}}^{+}) +\f{\Delta t}{s_{1}\hat{w}_{1}\Delta r}\,\f{\alpha(r_{\textnormal{\tiny\textsc{L}}})}{\psi^{2}(r_{\textnormal{\tiny\textsc{L}}})} \Big\{\, \f{1}{2}\big(\mu+|\mu|\big)\,f_{h}^{n}(r_{\textnormal{\tiny\textsc{L}}}^{-})+\f{1}{2}\big(\mu-|\mu|\big)\,f_{h}^{n}(r_{\textnormal{\tiny\textsc{L}}}^{+}) \,\Big\} \nonumber \\ &=\f{\Delta t}{s_{1}\hat{w}_{1}\Delta r}\,\f{\alpha(r_{\textnormal{\tiny\textsc{L}}})}{\psi^{2}(r_{\textnormal{\tiny\textsc{L}}})}\,\f{1}{2}\big(\mu+|\mu|\big)\,f_{h}^{n}(r_{\textnormal{\tiny\textsc{L}}}^{-}) +\Big\{\,1+\f{\Delta t}{s_{1}\hat{w}_{1}\Delta r}\,\f{\alpha(r_{\textnormal{\tiny\textsc{L}}})}{\psi^{2}(r_{\textnormal{\tiny\textsc{L}}})}\,\f{1}{2}\big(\mu-|\mu|\big)\,\Big\}\,f_{h}^{n}(r_{\textnormal{\tiny\textsc{L}}}^{+}). \nonumber \end{align} On the right-hand side of Eq.~\eqref{eq:PhiOne_R} (last line), the coefficient in front of $f_{h}^{n}(r_{\textnormal{\tiny\textsc{L}}}^{-})$ is nonnegative since $\alpha(r_{\textnormal{\tiny\textsc{L}}}),\psi^{2}(r_{\textnormal{\tiny\textsc{L}}})>0$ and $\big(\mu+|\mu|\big)\ge0$. Only the coefficient in front of $f_{h}^{n}(r_{\textnormal{\tiny\textsc{L}}}^{+})$ can become negative since $\big(\mu-|\mu|\big)\le0$. Assuming $f_{h}^{n}(r_{\textnormal{\tiny\textsc{L}}}^{-}),f_{h}^{n}(r_{\textnormal{\tiny\textsc{L}}}^{+})\ge0$, it is easy to show that $\Phi_{1}^{(r)}\big[f_{h}^{n}(r_{\textnormal{\tiny\textsc{L}}}^{-}),f_{h}^{n}(r_{\textnormal{\tiny\textsc{L}}}^{+})\big]\ge0$, if \begin{equation} \Delta t\le\f{s_{1}\hat{w}_{1}\Delta r}{|\mu|}\,\f{\psi^{2}(r_{\textnormal{\tiny\textsc{L}}})}{\alpha(r_{\textnormal{\tiny\textsc{L}}})}. \end{equation} Similarly, for $f_{h}^{n}(r_{\textnormal{\tiny\textsc{H}}}^{-}),f_{h}^{n}(r_{\textnormal{\tiny\textsc{H}}}^{+})\ge0$, one finds that $\Phi_{N^{(r)}}^{(r)}\big[f_{h}^{n}(r_{\textnormal{\tiny\textsc{H}}}^{-}),f_{h}^{n}(r_{\textnormal{\tiny\textsc{H}}}^{+})\big]\ge0$, if \begin{equation} \Delta t\le\f{s_{1}\hat{w}_{N^{(r)}}\Delta r}{|\mu|}\,\f{\psi^{2}(r_{\textnormal{\tiny\textsc{H}}})}{\alpha(r_{\textnormal{\tiny\textsc{H}}})}. \end{equation} Therefore, assuming $f_{h}^{n}\ge0$ in the combined quadrature set $\mathbf{S}^{(r)}=\hat{S}^{(r)}\otimes\tilde{\mathbf{S}}^{(r)}$, where the points in $\hat{S}^{(r)}$ are used to evaluate the integral over $K^{(r)}$ in Eq.~\eqref{eq:Gamma1} and the points in $\tilde{\mathbf{S}}^{(r)}$ are used to evaluate the integral over $\tilde{K}^{(r)}$ in Eq.~\eqref{eq:averageUpdateInTermsOfGammaSphericalSymmetryGR}, a sufficient condition on the time step to guarantee $\int_{\tilde{K}^{(r)}}\Gamma^{(r)}[f_{h}^{n}]d\tilde{V}^{(r)}\ge0$ is given by \begin{align} \Delta t &\le\min\big(\psi^{2}(r_{\textnormal{\tiny\textsc{L}}})/\alpha(r_{\textnormal{\tiny\textsc{L}}}),\psi^{2}(r_{\textnormal{\tiny\textsc{H}}})/\alpha(r_{\textnormal{\tiny\textsc{H}}})\big)\,\hat{w}_{N^{(r)}}\,s_{1}\,\Delta r. \label{eq:CFLSphericalSymmetryRadius} \end{align} (Here, $\hat{w}_{1}=\hat{w}_{N^{(r)}}$ is used.) 
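In practice, the restriction in Eq.~\eqref{eq:CFLSphericalSymmetryRadius} is inexpensive to evaluate. A schematic helper (in Python) is sketched below; the lapse and conformal-factor profiles are invented for illustration, and the worst case $|\mu|=1$ is assumed, as in Eq.~\eqref{eq:CFLSphericalSymmetryRadius}.
\begin{verbatim}
import numpy as np

def dt_fe_radial(r_edges, alpha, psi, s1, w1):
    # Sufficient forward-Euler time step of the radial restriction,
    # per element: alpha and psi are callables, s1 is the splitting
    # weight, and w1 = w_1 = w_{N^(r)} is the normalized Gauss-Lobatto
    # end-point weight (w1 = 1/2 for the two-point rule).
    rL, rH = r_edges[:-1], r_edges[1:]
    fac = np.minimum(psi(rL)**2 / alpha(rL), psi(rH)**2 / alpha(rH))
    return fac * w1 * s1 * (rH - rL)

# illustrative (hypothetical) metric profiles and radial grid
alpha = lambda r: 0.6 + 0.4 * r / (1.0 + r)
psi = lambda r: 1.0 + 0.5 / (1.0 + r)
r_edges = np.linspace(0.0, 10.0, 101)
dt_FE = dt_fe_radial(r_edges, alpha, psi, s1=1.0/3.0, w1=0.5).min()
\end{verbatim}
The global step is the minimum over all elements (and over the corresponding $\mu$- and $\varepsilon$-restrictions), possibly scaled by $C_{\mathrm{SSP}}$ for the time integrator in use.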
Sufficient conditions on $\Delta t$ for $\int_{\tilde{K}^{(\mu)}}\Gamma^{(\mu)}[f_{h}^{n}]d\tilde{V}^{(\mu)}\ge0$ and $\int_{\tilde{K}^{(\varepsilon)}}\Gamma^{(\varepsilon)}[f_{h}^{n}]d\tilde{V}^{(\varepsilon)}\ge0$ can be derived in a similar way (we refer the interested reader to \citet{EnHaXi15} for details). Together, these restrictions on the time step ensure $f_{\mathbf{K}}^{n+1}\ge0$. It should be noted that the time step restrictions derived here are sufficient, not necessary, conditions. They are typically more restrictive than the time step required for numerical stability. Thus, in practical calculations, larger time steps may be taken. If violations of the physical bounds are detected after a time step, $\Delta t$ can be reduced to satisfy the sufficient conditions and the time step redone. The proof for $f_{\mathbf{K}}^{n+1}\le1$ relies on the divergence-free condition in Eq.~\eqref{eq:DivergenceFreeCondition}, which can be written as \begin{align} \f{1}{V_{\mathbf{K}}} \Big\{\, &\psi^{6}(r_{\textnormal{\tiny\textsc{H}}})\,r_{\textnormal{\tiny\textsc{H}}}^{2}\int_{\tilde{K}^{(r)}}H^{(r)}(r_{\textnormal{\tiny\textsc{H}}},\mu,\varepsilon)\,\varepsilon^{2}\,d\tilde{V}^{(r)} \nonumber \\ &\hspace{36pt} -\psi^{6}(r_{\textnormal{\tiny\textsc{L}}})\,r_{\textnormal{\tiny\textsc{L}}}^{2}\int_{\tilde{K}^{(r)}}H^{(r)}(r_{\textnormal{\tiny\textsc{L}}},\mu,\varepsilon)\,\varepsilon^{2}\,d\tilde{V}^{(r)} \nonumber \\ &+\int_{\tilde{K}^{(\mu)}}H^{(\mu)}(r,\mu_{\textnormal{\tiny\textsc{H}}},\varepsilon)\,\psi^{6}(r)\,r^{2}\,\varepsilon^{2}\,d\tilde{V}^{(\mu)} \nonumber \\ &\hspace{36pt} -\int_{\tilde{K}^{(\mu)}}H^{(\mu)}(r,\mu_{\textnormal{\tiny\textsc{L}}},\varepsilon)\,\psi^{6}(r)\,r^{2}\,\varepsilon^{2}\,d\tilde{V}^{(\mu)} \nonumber \\ &+\varepsilon_{\textnormal{\tiny\textsc{H}}}^{2}\int_{\tilde{K}^{(\varepsilon)}}H^{(\varepsilon)}(r,\mu,\varepsilon_{\textnormal{\tiny\textsc{H}}})\,\psi^{6}(r)\,r^{2}\,d\tilde{V}^{(\varepsilon)} \nonumber \\ &\hspace{36pt} -\varepsilon_{\textnormal{\tiny\textsc{L}}}^{2}\int_{\tilde{K}^{(\varepsilon)}}H^{(\varepsilon)}(r,\mu,\varepsilon_{\textnormal{\tiny\textsc{L}}})\,\psi^{6}(r)\,r^{2}\,d\tilde{V}^{(\varepsilon)} \,\Big\} = 0. \label{eq:divergenceFreeSphericalSymmetryGR} \end{align} In Eq.~\eqref{eq:ConservativeBoltzmannSphericalSymmetryGRDG}, we approximate the derivatives $\partial_{r}\alpha$ and $\partial_{r}\psi^{4}$ in $K^{(r)}$ (appearing in $H^{(\mu)}$ and $H^{(\varepsilon)}$; cf.\ Eq.~\eqref{eq:phaseSpaceFluxCoefficients}) with polynomials and compute $\alpha$ and $\psi^{4}$ from \begin{equation} \alpha(r)=\alpha(r_{\textnormal{\tiny\textsc{L}}})+\int_{r_{\textnormal{\tiny\textsc{L}}}}^{r}\pd{}{r}\alpha(r')\,dr'\quad\mbox{and}\quad \psi^{4}(r)=\psi^{4}(r_{\textnormal{\tiny\textsc{L}}})+\int_{r_{\textnormal{\tiny\textsc{L}}}}^{r}\pd{}{r}\psi^{4}(r')\,dr', \end{equation} where the Gaussian quadrature rule is used to evaluate the integrals exactly. Two-dimensional Gaussian quadrature rules are also used to evaluate the integrals over $\tilde{K}^{(r)}$, $\tilde{K}^{(\mu)}$, and $\tilde{K}^{(\varepsilon)}$, using $L^{(r)}$, $L^{(\mu)}$, and $L^{(\varepsilon)}$ points in the $r$, $\mu$, and $\varepsilon$ dimensions, respectively. With this choice, it is straightforward to show that the discretization satisfies the divergence-free condition \eqref{eq:divergenceFreeSphericalSymmetryGR}, provided $L^{(\mu)}\ge1$, $L^{(\varepsilon)}\ge2$, while $L^{(r)}$ depends on the degree of the polynomials approximating $\pd{}{r}\alpha$ and $\pd{}{r}\psi^{4}$.
Using the definitions in Eqs.~\eqref{eq:Gamma1}--\eqref{eq:Gamma3}, a direct calculation shows that \begin{align} &\f{s_{1}}{V_{\mathbf{K}}}\int_{\tilde{K}^{(r)}}\Gamma^{(r)}[1]d\tilde{V}^{(r)} + \f{s_{2}}{V_{\mathbf{K}}}\int_{\tilde{K}^{(\mu)}}\Gamma^{(\mu)}[1]d\tilde{V}^{(\mu)} + \f{s_{3}}{V_{\mathbf{K}}}\int_{\tilde{K}^{(\varepsilon)}}\Gamma^{(\varepsilon)}[1]d\tilde{V}^{(\varepsilon)} \nonumber \\ &=\f{s_{1}}{V_{\mathbf{K}}}\int_{\tilde{K}^{(r)}}\int_{K^{(r)}}\tau\,dr\,d\tilde{V}^{(r)} + \f{s_{2}}{V_{\mathbf{K}}}\int_{\tilde{K}^{(\mu)}}\int_{K^{(\mu)}}\tau\,d\mu\,d\tilde{V}^{(\mu)} + \f{s_{3}}{V_{\mathbf{K}}}\int_{\tilde{K}^{(\varepsilon)}}\int_{K^{(\varepsilon)}}\tau\,d\varepsilon\,d\tilde{V}^{(\varepsilon)} \nonumber \\ &\hspace{12pt} -\f{\Delta t}{V_{\mathbf{K}}} \Big\{\, \psi^{6}(r_{\textnormal{\tiny\textsc{H}}})\,r_{\textnormal{\tiny\textsc{H}}}^{2}\int_{\tilde{K}^{(r)}}H^{(r)}(r_{\textnormal{\tiny\textsc{H}}},\mu,\varepsilon)\,\varepsilon^{2}\,d\tilde{V}^{(r)} -\psi^{6}(r_{\textnormal{\tiny\textsc{L}}})\,r_{\textnormal{\tiny\textsc{L}}}^{2}\int_{\tilde{K}^{(r)}}H^{(r)}(r_{\textnormal{\tiny\textsc{L}}},\mu,\varepsilon)\,\varepsilon^{2}\,d\tilde{V}^{(r)} \nonumber \\ &\hspace{36pt} +\int_{\tilde{K}^{(\mu)}}H^{(\mu)}(r,\mu_{\textnormal{\tiny\textsc{H}}},\varepsilon)\,\psi^{6}(r)\,r^{2}\,\varepsilon^{2}\,d\tilde{V}^{(\mu)} - \int_{\tilde{K}^{(\mu)}}H^{(\mu)}(r,\mu_{\textnormal{\tiny\textsc{L}}},\varepsilon)\,\psi^{6}(r)\,r^{2}\,\varepsilon^{2}\,d\tilde{V}^{(\mu)} \nonumber \\ &\hspace{36pt} +\varepsilon_{\textnormal{\tiny\textsc{H}}}^{2}\int_{\tilde{K}^{(\varepsilon)}}H^{(\varepsilon)}(r,\mu,\varepsilon_{\textnormal{\tiny\textsc{H}}})\,\psi^{6}(r)\,r^{2}\,d\tilde{V}^{(\varepsilon)}-\varepsilon_{\textnormal{\tiny\textsc{L}}}^{2}\int_{\tilde{K}^{(\varepsilon)}}H^{(\varepsilon)}(r,\mu,\varepsilon_{\textnormal{\tiny\textsc{L}}})\,\psi^{6}(r)\,r^{2}\,d\tilde{V}^{(\varepsilon)}\,\Big\} \nonumber \\ &=s_{1}+s_{2}+s_{3}=1, \label{eq:gammasWithOneSphericalSymmetryGR} \end{align} where the divergence-free condition in Eq.~\eqref{eq:divergenceFreeSphericalSymmetryGR} is used. Since the divergence-free condition holds, it is then straightforward to show that the cell-average of $g_{h}=1-f_{h}$ satisfies (cf.\ Eq.~\eqref{eq:averageUpdateInTermsOfGammaSphericalSymmetryGR}) \begin{align} g_{\mathbf{K}}^{n+1} &=1 - f_{\mathbf{K}}^{n+1} \\ &= \f{s_{1}}{V_{\mathbf{K}}}\int_{\tilde{K}^{(r)}}\big(\Gamma^{(r)}[1]-\Gamma^{(r)}[f_{h}^{n}]\big)d\tilde{V}^{(r)} + \f{s_{2}}{V_{\mathbf{K}}}\int_{\tilde{K}^{(\mu)}}\big(\Gamma^{(\mu)}[1]-\Gamma^{(\mu)}[f_{h}^{n}]\big)d\tilde{V}^{(\mu)} \nonumber \\ &\hspace{24pt} +\f{s_{3}}{V_{\mathbf{K}}}\int_{\tilde{K}^{(\varepsilon)}}\big(\Gamma^{(\varepsilon)}[1]-\Gamma^{(\varepsilon)}[f_{h}^{n}]\big)d\tilde{V}^{(\varepsilon)} \nonumber \\ &= \f{s_{1}}{V_{\mathbf{K}}}\int_{\tilde{K}^{(r)}}\Gamma^{(r)}[g_{h}^{n}]d\tilde{V}^{(r)} + \f{s_{2}}{V_{\mathbf{K}}}\int_{\tilde{K}^{(\mu)}}\Gamma^{(\mu)}[g_{h}^{n}]d\tilde{V}^{(\mu)} + \f{s_{3}}{V_{\mathbf{K}}}\int_{\tilde{K}^{(\varepsilon)}}\Gamma^{(\varepsilon)}[g_{h}^{n}]d\tilde{V}^{(\varepsilon)}, \nonumber \end{align} where the linearity property of the operators in Eqs.~\eqref{eq:Gamma1}--\eqref{eq:Gamma3} is used; e.g., $\Gamma^{(r)}[1]-\Gamma^{(r)}[f_{h}^{n}]=\Gamma^{(r)}[1-f_{h}^{n}]=\Gamma^{(r)}[g_{h}^{n}]$.
Thus, provided Eq.~\eqref{eq:divergenceFreeSphericalSymmetryGR} and the restrictions on $\Delta t$ hold, and the conditions on $f_{h}^{n}$ also hold for $g_{h}^{n}$, it follows that $g_{\mathbf{K}}^{n+1}\ge0$ (or $f_{\mathbf{K}}^{n+1}\le1$). The numerical method developed by \citet{EnHaXi15}, and outlined above, is designed to preserve the physical bounds of the cell averaged distribution function (i.e., $0\le f_{\mathbf{K}}\le1$), provided sufficiently accurate quadratures are used, specific time step restrictions are satisfied, \emph{and} that the polynomial approximating the distribution function inside each phase space element $\mathbf{K}$ at time $t^{n}$ is bounded in a set of quadrature points, which we denote $S$. After one time step, it is possible that $f_{h}^{n+1}$ violates the bounds for some points in the set $S$. In the DG method, the limiter proposed by \citet{ZhSh10a} is used to re-enforce the bounds. That is, the polynomial obtained after a time step $\Delta t$, $f_{h}^{n+1}(\mathbf{z})$, is replaced with the ``limited'' polynomial \begin{equation} \tilde{f}_{h}^{n+1}(\mathbf{z})=\vartheta\,f_{h}^{n+1}(\mathbf{z})+(\,1-\vartheta\,)\,f_{\mathbf{K}}^{n+1}, \label{eq:limitedPolynomial} \end{equation} where the limiter parameter $\vartheta\in[0,1]$ is given by \begin{equation} \vartheta=\min\Big\{\Big|\f{M-f_{\mathbf{K}}^{n+1}}{M_{S}-f_{\mathbf{K}}^{n+1}}\Big|,\Big|\f{m-f_{\mathbf{K}}^{n+1}}{m_{S}-f_{\mathbf{K}}^{n+1}}\Big|,1\Big\}, \label{eq:limiter} \end{equation} with $m=0$ and $M=1$, and \begin{equation} M_{S}=\max_{\mathbf{z} \in S}f_{h}^{n+1}(\mathbf{z}), \qquad m_{S}=\min_{\mathbf{z} \in S}f_{h}^{n+1}(\mathbf{z}), \end{equation} and $S$ represents the finite set of quadrature points in $\mathbf{K}$ where the bounds must hold. For $\vartheta=0$, the entire solution is limited to the cell-average, while for $\vartheta=1$, $\tilde{f}_{h}^{n+1}=f_{h}^{n+1}$. It is thus absolutely necessary to maintain the bounds on the cell-average; otherwise, the limiting procedure will be futile. In practice, $\vartheta$ remains close to unity, and the limiting is a small correction. It has been shown \citep{ZhSh10a} that this ``linear scaling limiter'' maintains high order of accuracy. Also, note that the limiting procedure is conservative for particle number since it preserves the cell averaged distribution function; i.e., by inserting Eq.~\eqref{eq:limitedPolynomial} into the definition of the cell average in Eq.~\eqref{eq:boundPreservingCellAverage}: \begin{equation} \f{1}{V_{\mathbf{K}}}\int_{\mathbf{K}}\tilde{f}_{h}^{n+1}\,dV =\f{1}{V_{\mathbf{K}}}\int_{\mathbf{K}}\big(\,\vartheta\,f_{h}^{n+1}+(\,1-\vartheta\,)\,f_{\mathbf{K}}^{n+1}\,\big)\,dV =f_{\mathbf{K}}^{n+1}. \end{equation} In the discussion above, forward Euler time stepping is used, which is only first-order accurate. For explicit time integration, the bound-preserving scheme can easily be extended to higher-order accuracy in time by using high-order SSP time stepping methods \citep{ShOs88,GoShTa01}, which are multi-stage methods that can be formulated as convex combinations of forward Euler operators. Provided limiting is applied at each stage, the bound-preserving property follows from convexity arguments. For neutrino transport problems where neutrino--matter interactions are treated with implicit methods, it is difficult to achieve both high-order accuracy and bounded solutions, and this topic remains open for further research. We will discuss this issue further below in the context of a two-moment model.
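Returning to the limiter in Eqs.~\eqref{eq:limitedPolynomial}--\eqref{eq:limiter}, the procedure amounts to only a few lines of code. The sketch below is our illustration (the function name is hypothetical, and equally weighted nodal values are used as a stand-in for the quadrature cell-average $f_{\mathbf{K}}$):

\begin{verbatim}
import numpy as np

def linear_scaling_limiter(f_nodes, f_avg, m=0.0, M=1.0):
    # Damp the polynomial toward its (bound-satisfying) cell-average
    # just enough that all nodal values lie in [m, M].
    MS, mS = f_nodes.max(), f_nodes.min()
    theta = min(abs((M - f_avg)/(MS - f_avg)) if MS > M else 1.0,
                abs((m - f_avg)/(mS - f_avg)) if mS < m else 1.0,
                1.0)
    return theta*f_nodes + (1.0 - theta)*f_avg

f_nodes = np.array([-0.05, 0.30, 0.80, 1.10])  # violates [0,1]
f_avg = f_nodes.mean()                         # stand-in for f_K
limited = linear_scaling_limiter(f_nodes, f_avg)
assert np.isclose(limited.mean(), f_avg)       # cell-average preserved
assert limited.min() >= 0.0 and limited.max() <= 1.0 + 1.0e-12
\end{verbatim}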
Another open issue is the challenge of simultaneous number and energy conservation in the phase space advection problem discussed here: The limiter in Eq.~\eqref{eq:limitedPolynomial} preserves the particle number, but not higher moments of the distribution function. In the present model, the spacetime is stationary, which implies that the so-called Komar mass ($\alpha\,\varepsilon\,f$) is conserved. Thus, if bounded solutions and exact conservation of the Komar mass are desired, modifications to the limiter are needed. \subsubsection{Realizability-preserving moment methods} \label{sec:realizabilityPreserving} \citet{ChEnHa19} developed a numerical method for a two-moment model based on DG spatial discretization and IMEX time stepping. The method is specifically designed to preserve bounds on the moments as dictated by Pauli's exclusion principle. As such, it is an extension of the bound-preserving method discussed above, but for a nonlinear system of hyperbolic balance laws with stiff sources. As is reasonable for an initial investigation, the model adopted by \citet{ChEnHa19} is rather simple when compared to the two-moment models used to model neutrino transport in contemporary core-collapse supernova simulations. However, the work highlighted the role of the moment closure in the design of robust two-moment methods for neutrino transport, and developed an IMEX scheme with a reasonable time step restriction that is compatible with bounded solutions. In this way, the work laid the foundations for a framework that may help future development of robust methods for models with improved physical fidelity. To simplify the discussion, we consider the model in \citet{ChEnHa19} for one spatial dimension and define moments of the distribution function as \begin{equation} \big\{\,\mathcal{J},\mathcal{H},\mathcal{K}\,\big\}(x,t)=\f{1}{2}\int_{-1}^{1}f(\mu,x,t)\,\mu^{\{0,1,2\}}\,d\mu. \end{equation} The two-moment model can be written as a system of hyperbolic balance laws, \begin{equation} \pd{\mathbf{u}}{t} + \pd{\mathbf{f}(\mathbf{u})}{x} = \mathbf{\eta} - R\,\mathbf{u} \equiv \mathbf{c}(\mathbf{u}), \label{eq:twoMomentModelRealizability} \end{equation} where the evolved moment vector is $\mathbf{u}=(\mathcal{J},\mathcal{H})^{T}$, the flux vector is $\mathbf{f}=(\mathcal{H},\mathcal{K})^{T}$, the emissivity is $\mathbf{\eta}=(\sigma_{A}\,\mathcal{J}_{0},0)^{T}$, and $R=\mbox{diag}(\sigma_{A},\sigma_{T})$. Here, $\mathcal{J}_{0}$ is the zeroth moment of an equilibrium distribution function, $f_{0}$, satisfying $f_{0}\in[0,1]$ (i.e., Fermi--Dirac statistics), $\sigma_{A}\ge0$ is the absorption opacity, and $\sigma_{T}=\sigma_{A}+\sigma_{S}$, where $\sigma_{S}\ge0$ is the scattering opacity (assuming isotropic and isoenergetic scattering). In Eq.~\eqref{eq:twoMomentModelRealizability}, a closure is assumed so that $\mathcal{K}=\mathcal{K}(\mathbf{u})$. For fermions, the Pauli exclusion principle requires the distribution function to satisfy the condition $0 \le f \le 1$. This puts corresponding restrictions on realizable values for the moments of $f$. It is then interesting to study the design of a numerical method for solving the system of moment equations given by Eq.~\eqref{eq:twoMomentModelRealizability} that preserves realizability of the moments; i.e., the moments evolve within the set of admissible values as dictated by Pauli's exclusion principle.
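The moment definitions above are simple to evaluate numerically, e.g., with a standard Gauss-Legendre rule, as in the following sketch (ours; the helper name is hypothetical):

\begin{verbatim}
import numpy as np

def angular_moments(f, n_quad=8):
    # {J, H, K} = (1/2) int_{-1}^{1} f(mu) mu^{0,1,2} dmu,
    # evaluated with Gauss-Legendre quadrature.
    mu, w = np.polynomial.legendre.leggauss(n_quad)
    fm = f(mu)
    return tuple(0.5*np.sum(w*fm*mu**k) for k in range(3))

# Example: an isotropic occupation f = 1/2 gives J = 1/2, H = 0,
# and K = J/3 (Eddington factor 1/3).
J, H, K = angular_moments(lambda mu: 0.5 + 0.0*mu)
assert np.isclose(J, 0.5) and np.isclose(H, 0.0) and np.isclose(K, J/3.0)
\end{verbatim}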
If we let \begin{equation} \mathfrak{R} := \left\{\,f~|~0\le f \le 1 ~\text{and}~0<\f{1}{2}\int_{-1}^{1}f\,d\mu<1\,\right\}, \end{equation} the moments $\mathbf{u}=(\mathcal{J},\mathcal{H})^{T}$ are realizable if they can be obtained from a distribution function $f(\mu)\in\mathfrak{R}$. The set of all realizable moments $\mathcal{R}$ is \cite[e.g.,][]{LaBa11} \begin{equation} \mathcal{R}:=\big\{\,\mathbf{u}=\big(\mathcal{J},\mathcal{H}\big)^{T}~|~\mathcal{J}\in(0,1)~\text{and}~(1-\mathcal{J})\,\mathcal{J}-|\mathcal{H}| > 0\,\big\}. \label{eq:realizableSet} \end{equation} The geometry of the set $\mathcal{R}$ in the $(\mathcal{H},\mathcal{J})$-plane is illustrated in Figure~\ref{fig:ChEnHa19_Fig1} (light blue region). For comparison, the realizable domain $\mathcal{R}^{+}$ of positive distribution functions (no upper bound on $f$), which is a cone defined by $\mathcal{J}>0$ and $\mathcal{J}-|\mathcal{H}|>0$ (light red region), is also shown. The realizable set $\mathcal{R}$ is a bounded subset of $\mathcal{R}^{+}$. Importantly, the set $\mathcal{R}$ is convex. This means that for two arbitrary elements $\mathbf{u}_{a},\mathbf{u}_{b}\in\mathcal{R}$, the convex combination $\mathbf{u}_{c} = \vartheta\,\mathbf{u}_{a} + (1-\vartheta)\,\mathbf{u}_{b}\in\mathcal{R}$, where $0\leq\vartheta\leq1$. This property is used repeatedly (sometimes in a nested fashion) to design the numerical method. \begin{figure} \includegraphics[width=\textwidth]{ChEnHa19_Fig1} \caption{Illustration of the realizable set $\mathcal{R}$ (light blue region) defined in Eq.~\eqref{eq:realizableSet}. The black lines define the boundary $\partial\mathcal{R}$. The red lines indicate the boundary of the realizable set of positive distributions $\mathcal{R}^{+}$ (light red region).} \label{fig:ChEnHa19_Fig1} \end{figure} The DG method for the two-moment model is in many ways very similar to that discussed in Sect.~\ref{sec:boundPreserving}. The computational domain $D$ is divided into elements $K=(x_{\textnormal{\tiny\textsc{L}}},x_{\textnormal{\tiny\textsc{H}}})$. On each element, the approximation space is \begin{equation}\label{mdg:vhk} \mathbb{V}_{h}^{k}=\{\varphi_{h} : \varphi_{h}\big|_{K} \in \mathbb{P}^{k}(K), \, \, \forall\ K\in D \}, \end{equation} where $\mathbb{P}^{k}$ is the space of polynomials in $x$ of maximal degree $k$. The approximation to the moments, $\mathbf{u}_{h}$, is then expressed as \begin{equation} \mathbf{u}_{h}(x,t)=\sum_{i=1}^{k+1}\mathbf{u}_{i}(t)\,P_{i}(x), \end{equation} where each $P_{i}\in\mathbb{V}_{h}^{k}$ and each $\mathbf{u}_{i}$ is a two-component vector representing the unknowns per element in the DG method. The semi-discrete DG method is then as follows: \textit{Find $\mathbf{u}_{h} \in \mathbb{V}_{h}^{k}$ such that} \begin{align} \int_{K}\pd{\mathbf{u}_{h}}{t}\,\varphi_{h}\,dx &+\big[\,\widehat{\mathbf{f}(\mathbf{u}_{h})}(x_{\textnormal{\tiny\textsc{H}}})\,\varphi_{h}(x_{\textnormal{\tiny\textsc{H}}}^{-})-\widehat{\mathbf{f}(\mathbf{u}_{h})}(x_{\textnormal{\tiny\textsc{L}}})\,\varphi_{h}(x_{\textnormal{\tiny\textsc{L}}}^{+})\,\big] \nonumber \\ &-\int_{K}\mathbf{f}(\mathbf{u}_{h})\,\pd{\varphi_{h}}{x}\,dx =\int_{K}\mathbf{c}(\mathbf{u}_{h})\,\varphi_{h}\,dx \label{eq:twoMomentDG} \end{align} holds for all $\varphi_{h}\in\mathbb{V}_{h}^{k}$ and all $K\in D$.
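The convexity of $\mathcal{R}$ is exploited repeatedly below; note also that membership in $\mathcal{R}$, and in the larger cone $\mathcal{R}^{+}$, is a pointwise algebraic test, as the following sketch illustrates (our code, not from \citet{ChEnHa19}):

\begin{verbatim}
def in_R(J, H):
    # Realizable set R: 0 < J < 1 and (1 - J)*J - |H| > 0
    # (moments of distributions obeying Fermi-Dirac statistics).
    return 0.0 < J < 1.0 and (1.0 - J)*J - abs(H) > 0.0

def in_R_plus(J, H):
    # Larger cone R+ of positive distributions (no upper bound on f).
    return J > 0.0 and J - abs(H) > 0.0

assert in_R(0.5, 0.2) and not in_R(0.5, 0.3)   # here (1 - J)*J = 0.25
assert in_R_plus(0.5, 0.3)                     # still realizable in R+
\end{verbatim}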
In Eq.~\eqref{eq:twoMomentDG}, \begin{equation} \widehat{\mathbf{f}(\mathbf{u}_{h})}(x_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}) =\mathbf{h}\big(\mathbf{u}_{h}(x_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{-}),\mathbf{u}_{h}(x_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{+})\big) \label{eq:twoMomentNumericalFluxFunction} \end{equation} is a numerical flux, where $\mathbf{h}$ is a numerical flux function. In the DG method, any standard numerical flux designed for hyperbolic conservation laws can be used. However, \citet{ChEnHa19} used the global Lax-Friedrichs flux, where \begin{equation} \mathbf{h}\big(\mathbf{u}_{h}(x_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{-}),\mathbf{u}_{h}(x_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{+})\big) =\f{1}{2} \Big[ \mathbf{f}\big(\mathbf{u}_{h}(x_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{-})\big)+\mathbf{f}\big(\mathbf{u}_{h}(x_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{+})\big) -\big(\mathbf{u}_{h}(x_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{+})-\mathbf{u}_{h}(x_{\textnormal{\tiny\textsc{H}}/\textnormal{\tiny\textsc{L}}}^{-})\big) \Big]. \label{eq:realizableTwoMomentNumericalFlux} \end{equation} It should be noted that when using the DG method for radiation transport, as long as the approximation space includes at least linear elements, it is not necessary to switch between centered and upwind-type fluxes (e.g., as is done in Eqs.~\eqref{eq:modifiedNumericalFluxEnergy}--\eqref{eq:modifiedNumericalFluxMomentum} for finite-volume and finite-difference methods to capture both the streaming and diffusive regimes). As such, the DG spatial discretization is naturally structure-preserving with respect to the diffusion limit, and well-suited for radiation transport \citep[e.g.,][]{LaMo89,Adams01}. In fact, the dissipation term in the numerical flux in Eq.~\eqref{eq:realizableTwoMomentNumericalFlux}, which is not present in the diffusive regime when employing switching between centered and upwind fluxes, plays an important role in the proof of the realizability-preserving property of the two-moment method presented here. It may therefore be difficult, if not impossible, to design realizability-preserving methods for the two-moment model without this term. Note that in the diffusion limit, $|\mathcal{H}|\ll\mathcal{J}$, the moment vector $\mathbf{u}$ is close to the line connecting $(0,0)$ and $(0,1)$ in Figure~\ref{fig:ChEnHa19_Fig1}. Then, if the particle density is low ($\mathcal{J}\ll1$) the moment vector is safely inside $\mathcal{R}$. On the other hand, if the particle density is high ($\mathcal{J}\lesssim1$), which, e.g., is the case for electron neutrinos in the supernova core, the moment vector is dangerously close to the boundary of $\mathcal{R}$, and care is needed in order to maintain $\mathbf{u}\in\mathcal{R}$. Further away from the supernova core, where neutrinos transition to streaming conditions, $|\mathcal{H}|\lesssim(1-\mathcal{J})\,\mathcal{J}$ ($\approx\mathcal{J}$ when $\mathcal{J}\ll1$), the moment vector is again close to the boundary of $\mathcal{R}$, and care in the numerics is again warranted. Maintaining $\mathbf{u}\in\mathcal{R}$ is necessary to ensure the well-posedness of the moment closure procedure \citep{Le96,Ju98,HaLeTi08}. Realizability-preserving methods maintain $\mathbf{u}\in\mathcal{R}$ and thus improve robustness. 
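In code, the numerical flux in Eq.~\eqref{eq:realizableTwoMomentNumericalFlux} is only a few lines. The sketch below (ours) takes the closure $\mathcal{K}=\mathcal{K}(\mathcal{J},\mathcal{H})$ as a user-supplied function; the dissipation coefficient is unity, the maximum signal speed in units where $c=1$:

\begin{verbatim}
import numpy as np

def flux(u, closure):
    # f(u) = (H, K)^T, with K = K(J, H) supplied by the closure.
    J, H = u
    return np.array([H, closure(J, H)])

def lax_friedrichs(u_m, u_p, closure):
    # Global Lax-Friedrichs flux with dissipation coefficient 1.
    return 0.5*(flux(u_m, closure) + flux(u_p, closure) - (u_p - u_m))
\end{verbatim}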
The semi-discretization of the two-moment model in Eq.~\eqref{eq:twoMomentDG} results in a system of ODEs of the form \begin{equation} \deriv{\mathbf{U}}{t} = \mathbf{T}(\mathbf{U}) + \mathbf{C}(\mathbf{U}), \label{eq:realizableTwoMomentODE} \end{equation} where $\mathbf{U}$ represents all the degrees of freedom evolved with the DG method, \begin{equation} \mathbf{U} =\Big\{\, \int_{K}\mathbf{u}_{h}\,\varphi_{h}\,dx \,\Big\}_{K\in D, \varphi_{h}\in\mathbb{V}_{h}^{k}}, \end{equation} which includes the cell-average of $\mathbf{u}_{h}$ in each element: \begin{equation} \mathbf{u}_{K} = \f{1}{\Delta x}\int_{K}\mathbf{u}_{h}\,dx. \label{eq:twoMomentBasicCellAverage} \end{equation} In Eq.~\eqref{eq:realizableTwoMomentODE}, the transport operator $\mathbf{T}(\mathbf{U})$ corresponds to the second (surface) and third (volume) terms on the left-hand side of Eq.~\eqref{eq:twoMomentDG}, while the collision operator $\mathbf{C}(\mathbf{U})$ corresponds to the right-hand side of Eq.~\eqref{eq:twoMomentDG}. To evolve Eq.~\eqref{eq:realizableTwoMomentODE} forward in time, \citet{ChEnHa19} developed IMEX schemes, where the transport operator is treated explicitly and the collision operator is treated implicitly. As discussed in Sect.~\ref{sec:boundPreserving}, the extension of the bound-preserving property to high-order methods relies on the strong-stability-preserving (SSP) property of the ODE solver. Explicit SSP Runge--Kutta (RK) methods of moderate order ($\le3$) are relatively easy to construct. Unfortunately, high-order (second- or higher-order temporal accuracy) SSP-IMEX methods \emph{with time step restrictions solely due to the explicit transport operator} do not exist (see for example Proposition~6.2 in \citet{GoShTa01}, which rules out the existence of implicit SSP-RK methods of order higher than one). Because of this, \citet{ChEnHa19} resorted to developing formally first-order accurate IMEX schemes with the following properties: (i) second-order accurate in the streaming limit, (ii) SSP (called convex-invariant in \citet{ChEnHa19}), with a time step restriction solely due to the explicit part, and (iii) well-behaved in the diffusion limit in the sense that the flux density remains proportional to the gradient of the number density with the correct constant of proportionality. The optimal scheme, in the sense that it is SSP with the same time step as the forward Euler scheme applied to the explicit part, is given by \begin{align} \mathbf{U}^{(1)} &= \Lambda_{\mathcal{R}}\Big\{\mathbf{U}^{n} + \Delta t\,\mathbf{T}(\mathbf{U}^{n})\Big\}, \label{eq:PDARS1} \\ \widetilde{\mathbf{U}}^{(2)} &=\mathbf{U}^{(1)} + \Delta t\,\mathbf{C}(\widetilde{\mathbf{U}}^{(2)}); \quad \mathbf{U}^{(2)}=\Lambda_{\mathcal{R}}\Big\{\widetilde{\mathbf{U}}^{(2)}\Big\}, \label{eq:PDARS2} \\ \mathbf{U}^{(3)} &= \Lambda_{\mathcal{R}}\Big\{\mathbf{U}^{(2)} + \Delta t\,\mathbf{T}(\mathbf{U}^{(2)})\Big\}, \label{eq:PDARS3} \\ \widetilde{\mathbf{U}}^{n+1} &= \f{1}{2}\big(\,\mathbf{U}^{n} + \mathbf{U}^{(3)}\,\big) + \f{1}{2}\Delta t\,\mathbf{C}(\widetilde{\mathbf{U}}^{n+1}); \quad \mathbf{U}^{n+1}=\Lambda_{\mathcal{R}}\Big\{\widetilde{\mathbf{U}}^{n+1}\Big\}. \label{eq:PDARS4} \end{align} This IMEX scheme involves two explicit evaluations of the transport operator and two implicit solves to evaluate the collision operator. The explicit stages, Eqs.~\eqref{eq:PDARS1} and \eqref{eq:PDARS3}, are forward Euler steps, while the implicit stages, Eqs.~\eqref{eq:PDARS2} and \eqref{eq:PDARS4}, can be viewed as backward Euler steps.
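Structurally, one step of this scheme can be expressed as in the sketch below (ours, under stated assumptions: \texttt{T} is the explicit transport operator, \texttt{solve\_C} solves the backward Euler relation $\mathbf{W}=\mathbf{V}+\Delta t\,\mathbf{C}(\mathbf{W})$ for $\mathbf{W}$, and \texttt{limiter} applies $\Lambda_{\mathcal{R}}$; all three are user-supplied callables):

\begin{verbatim}
def pdars_step(U, dt, T, solve_C, limiter):
    # One step of the optimal IMEX scheme above.
    U1 = limiter(U + dt*T(U))          # explicit (forward Euler)
    U2 = limiter(solve_C(U1, dt))      # implicit collision solve
    U3 = limiter(U2 + dt*T(U2))        # explicit (forward Euler)
    return limiter(solve_C(0.5*(U + U3), 0.5*dt))
\end{verbatim}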
Without collisions ($\mathbf{C}=0$), the scheme reduces to the optimal second-order accurate SSP-RK scheme of \citet{ShOs88} (also referred to as Heun's method). Although the scheme is formally only first-order accurate in time when collisions are frequent, quantities evolve on a diffusive time scale in this case, which is much longer than the time step required for stability of the explicit part. Therefore, temporal discretization errors remain small. On the other hand, second-order accuracy in the streaming limit is essential for maintaining non-oscillatory radiation solutions with the DG method in the streaming regime. In Eqs.~\eqref{eq:PDARS1}--\eqref{eq:PDARS4}, $\Lambda_{\mathcal{R}}$ is a realizability-enforcing limiter used to enforce point-wise realizability within each element. The limiter, which we describe in more detail below, assumes that the cell-average is realizable after each step. We begin by finding sufficient conditions for realizability-preservation of the cell-average in each step. For this purpose, since the remaining steps are equivalent, we consider only the explicit step in Eq.~\eqref{eq:PDARS1} and the implicit step in Eq.~\eqref{eq:PDARS2}. For an explicit forward Euler update, as in Eq.~\eqref{eq:PDARS1}, the equation for the cell-averaged moments (obtained from Eq.~\eqref{eq:twoMomentDG} with $\varphi_{h}=1$) is given by \begin{equation} \mathbf{u}_{\mathbf{K}}^{(1)} = \mathbf{u}_{\mathbf{K}}^{n} - \f{\Delta t}{\Delta x}\big[\,\widehat{\mathbf{f}(\mathbf{u}_{h}^{n})}(x_{\textnormal{\tiny\textsc{H}}})-\widehat{\mathbf{f}(\mathbf{u}_{h}^{n})}(x_{\textnormal{\tiny\textsc{L}}})\,\big]. \label{eq:twoMomentCellAverage} \end{equation} To construct a realizability-preserving explicit update for the two-moment model, one seeks to find sufficient conditions such that $\mathbf{u}_{\mathbf{K}}^{(1)}\in\mathcal{R}$. The strategy is very similar to that taken for the bound-preserving scheme discussed in Sect.~\ref{sec:boundPreserving}. To evaluate the cell-average on the right-hand side of Eq.~\eqref{eq:twoMomentCellAverage} (cf.\ Eq.~\eqref{eq:twoMomentBasicCellAverage}), an $N$-point Gauss-Lobatto quadrature rule is used on the interval $K$, with points \begin{equation} \hat{S} = \big\{\,x_{\textnormal{\tiny\textsc{L}}}=\hat{x}_{1},\ldots,\hat{x}_{N}=x_{\textnormal{\tiny\textsc{H}}}\,\big\}, \label{eq:twoMomentLobattoPoints} \end{equation} and weights $\hat{w}_{q}\in(0,1]$, normalized such that $\sum_{q=1}^{N}\hat{w}_{q}=1$. Using this quadrature and the numerical flux function in Eq.~\eqref{eq:twoMomentNumericalFluxFunction}, one can write Eq.~\eqref{eq:twoMomentCellAverage} as \begin{align} \mathbf{u}_{\mathbf{K}}^{(1)} &= \sum_{q=1}^{N}\hat{w}_{q}\,\mathbf{u}_{h}^{n}(\hat{x}_{q}) -\f{\Delta t}{\Delta x}\big[\,\mathbf{h}\big(\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{H}}}^{-}),\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{H}}}^{+})\big)-\mathbf{h}\big(\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{L}}}^{-}),\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{L}}}^{+})\big)\,\big] \nonumber \\ &= \sum_{q=2}^{N-1}\hat{w}_{q}\,\mathbf{u}_{h}^{n}(\hat{x}_{q}) + (\hat{w}_{1}+\hat{w}_{N})\,\Phi\big(\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{L}}}^{-}),\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{L}}}^{+}),\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{H}}}^{-}),\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{H}}}^{+})\big), \label{eq:twoMomentCellAveragePhi} \end{align} which is a convex combination of $\{\mathbf{u}_{h}^{n}(\hat{x}_{q})\}_{q=2}^{N-1}$ and $\Phi$.
(Note that $\hat{w}_{1}=\hat{w}_{N}$, so that $2\,\hat{w}_{1}=2\,\hat{w}_{N}=\hat{w}_{1}+\hat{w}_{N}$.) Thus, if, for each element $K$, $\mathbf{u}_{h}^{n}(\hat{x}_{q})\in\mathcal{R},\forall q=2,\ldots,N-1$ and $\Phi\in\mathcal{R}$, since the set $\mathcal{R}$ is convex it follows that $\mathbf{u}_{\mathbf{K}}^{(1)}\in\mathcal{R}$. In Eq.~\eqref{eq:twoMomentCellAveragePhi}, \begin{align} &\Phi\big(\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{L}}}^{-}),\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{L}}}^{+}),\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{H}}}^{-}),\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{H}}}^{+})\big) \nonumber \\ &= \f{1}{2}\big[\,\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{L}}}^{+})+\lambda\,\mathbf{h}\big(\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{L}}}^{-}),\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{L}}}^{+})\big)\,\big] + \f{1}{2}\big[\,\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{H}}}^{-})-\lambda\,\mathbf{h}\big(\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{H}}}^{-}),\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{H}}}^{+})\big)\,\big] \nonumber \\ &=(1-\lambda)\,\Phi_{0} + \f{1}{2}\,\lambda\,\Phi_{1} + \f{1}{2}\,\lambda\,\Phi_{2}, \label{eq:twoMomentPhi} \end{align} where $\lambda=\Delta t/(\Delta x\,\hat{w}_{1})=\Delta t/(\Delta x\,\hat{w}_{N})$ and \begin{align} \Phi_{0} &=\f{1}{2}\,\Big[\,\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{L}}}^{+})+\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{H}}}^{-})\,\Big], \\ \Phi_{1} &=\f{1}{2}\,\Big[\,\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{L}}}^{-})+\mathbf{f}\big(\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{L}}}^{-})\big)\,\Big] + \f{1}{2}\,\Big[\,\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{H}}}^{-})-\mathbf{f}\big(\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{H}}}^{-})\big)\,\Big] , \\ \Phi_{2} &=\f{1}{2}\,\Big[\,\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{L}}}^{+})+\mathbf{f}\big(\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{L}}}^{+})\big)\,\Big] +\f{1}{2}\,\Big[\,\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{H}}}^{+})-\mathbf{f}\big(\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{H}}}^{+})\big)\,\Big]. \end{align} In the last line in Eq.~\eqref{eq:twoMomentPhi}, if $\lambda\le1$, $\Phi$ is expressed as a convex combination of $\Phi_{0}$, $\Phi_{1}$, and $\Phi_{2}$. Thus, if $\Phi_{0},\Phi_{1},\Phi_{2}\in\mathcal{R}$, the time step restriction \begin{equation} \Delta t\le\hat{w}_{N}\,\Delta x \end{equation} is sufficient to guarantee $\mathbf{u}_{\mathbf{K}}^{(1)}\in\mathcal{R}$. The condition $\Phi_{0}\in\mathcal{R}$ follows from the assumption $\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{L}}}^{+}),\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{H}}}^{-})\in\mathcal{R}$, while the conditions $\Phi_{1},\Phi_{2}\in\mathcal{R}$ follow from the additional assumptions $\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{L}}}^{-}),\mathbf{u}_{h}^{n}(x_{\textnormal{\tiny\textsc{H}}}^{+})\in\mathcal{R}$ and Lemma~2 in \citet{ChEnHa19}, which proves $\Phi_{1},\Phi_{2}\in\mathcal{R}$ provided these expressions can be generated from distributions $f\in\mathfrak{R}$. We note that for Lemma~2 in \citet{ChEnHa19} to hold in the current setting, the moments must be consistent with a distribution function satisfying $0\le f\le1$, which demands a two-moment closure based on Fermi--Dirac statistics (the second component of $\Phi_{1}$ and $\Phi_{2}$ involves the Eddington factor). The maximum entropy closures of \citet{CeBl94,LaBa11} and the Kershaw-type closure of \citet{BaLa17} are suitable. 
On the other hand, the Minerbo, M1, and Kershaw closures discussed in Sect.~\ref{sec:closure} are based on positive distribution functions (with no upper bound), and are therefore not suitable if $\mathbf{u}\in\mathcal{R}$ is desired. These closures are only compatible with the relaxed condition $\mathbf{u}\in\mathcal{R}^{+}$. (In this case the approach discussed here, with minor modifications, is still applicable; e.g., see \citet{OlHaFr12} for a method with explicit time stepping.) For the implicit solve in Eq.~\eqref{eq:PDARS2}, the cell-average with backward Euler gives \begin{equation} \mathbf{u}_{\mathbf{K}}^{(2)} = \big(\,I+\Delta t\,R\,\big)^{-1}\big(\,\mathbf{u}_{\mathbf{K}}^{(1)}+\Delta t\,\mathbf{\eta}\,\big). \label{eq:twoMomentCellAverageImplicit} \end{equation} Here it is assumed that the opacity is constant within each element. The first component of Eq.~\eqref{eq:twoMomentCellAverageImplicit} is then \begin{equation} \mathcal{J}_{\mathbf{K}}^{(2)} = \f{\mathcal{J}_{\mathbf{K}}^{(1)} + \Delta t\,\sigma_{A}\,\mathcal{J}_{0,\mathbf{K}}}{1+\Delta t\,\sigma_{A}}. \label{eq:chuCollisionMomentComponentOne} \end{equation} Since $\mathcal{J}_{\mathbf{K}}^{(1)},\mathcal{J}_{0,\mathbf{K}}\in(0,1)$, it follows that $\mathcal{J}_{\mathbf{K}}^{(2)}\in(0,1)$. The second component of Eq.~\eqref{eq:twoMomentCellAverageImplicit} is \begin{equation} \mathcal{H}_{\mathbf{K}}^{(2)} = \f{\mathcal{H}_{\mathbf{K}}^{(1)}}{1+\Delta t\,\sigma_{T}}. \label{eq:chuCollisionMomentComponentTwo} \end{equation} Then, Lemma~3 in \citet{ChEnHa19}, which considers the moments in Eqs.~\eqref{eq:chuCollisionMomentComponentOne} and \eqref{eq:chuCollisionMomentComponentTwo}, shows that $|\mathcal{H}_{\mathbf{K}}^{(2)}|<(1-\mathcal{J}_{\mathbf{K}}^{(2)})\,\mathcal{J}_{\mathbf{K}}^{(2)}$, so that $\mathbf{u}_{\mathbf{K}}^{(2)}\in\mathcal{R}$. Note that this assumes a very simple form of the collision operator (i.e., emission, absorption, and isotropic and isoenergetic scattering). For more complicated collision operators with anisotropic kernels, energy coupling interactions, and Pauli blocking factors, it can become very difficult to prove that realizability of the cell-average is preserved in the implicit solve, and this must be investigated separately for each neutrino--matter interaction type. Moreover, the ability to prove results rigorously may then depend on the implicit solver used. The update in Eq.~\eqref{eq:twoMomentCellAverage} requires that for each element the polynomial approximation is realizable at each point of the quadrature set $\hat{S}$ in Eq.~\eqref{eq:twoMomentLobattoPoints}. Thus, after each stage in the time stepping algorithm in Eqs.~\eqref{eq:PDARS1}--\eqref{eq:PDARS4}, a limiter is applied in preparation for the next stage. Let the unlimited solution after any of the stages be $\widetilde{\mathbf{u}}_{h}=\big(\widetilde{\mathcal{J}}_{h},\widetilde{\mathcal{H}}_{h}\big)^{T}$. Following \citet{ZhSh10a}, a limiter from \citet{LiOs96} is first used to enforce the bounds on the zeroth moment $\widetilde{\mathcal{J}}_{h}$.
We replace the polynomial $\widetilde{\mathcal{J}}_{h}(x)$, the first component of $\widetilde{\mathbf{u}}_{h}$, with the limited polynomial \begin{equation} \widehat{\mathcal{J}}_{h}(x) =\vartheta_{1}\,\widetilde{\mathcal{J}}_{h}(x)+(1-\vartheta_{1})\,\mathcal{J}_{\mathbf{K}}, \label{eq:limitDensity} \end{equation} where the limiter parameter $\vartheta_{1}$ is given by \begin{equation} \vartheta_{1} =\min\Big\{\,\Big|\f{M-\mathcal{J}_{\mathbf{K}}}{M_{\hat{S}}-\mathcal{J}_{\mathbf{K}}}\Big|,\Big|\f{m-\mathcal{J}_{\mathbf{K}}}{m_{\hat{S}}-\mathcal{J}_{\mathbf{K}}}\Big|,1\,\Big\}, \end{equation} with $m=\delta$ and $M=1-\delta$, where $\delta$ is some small number (e.g., $10^{-16}$), and \begin{equation} M_{\hat{S}}=\max_{x\in\hat{S}}\widetilde{\mathcal{J}}_{h}(x) \quad\text{and}\quad m_{\hat{S}}=\min_{x\in\hat{S}}\widetilde{\mathcal{J}}_{h}(x). \end{equation} This step, which ensures $\widehat{\mathcal{J}}_{h}\in(0,1)$, corresponds to the bound-enforcing limiter described in Sect.~\ref{sec:boundPreserving}. After this step, we denote $\widehat{\mathbf{u}}_{h}=\big(\widehat{\mathcal{J}}_{h},\widetilde{\mathcal{H}}_{h}\big)^{T}$. The next step is to enforce $\gamma(\widehat{\mathbf{u}}_{h})\equiv(1-\widehat{\mathcal{J}}_{h})\,\widehat{\mathcal{J}}_{h}-|\widetilde{\mathcal{H}}_{h}|>0$ for all $x\in\hat{S}$, which follows a procedure similar to that developed by \citet{ZhSh10b} to ensure positivity of the pressure when solving the Euler equations of gas dynamics. If $\widehat{\mathbf{u}}_{h}$ lies outside $\mathcal{R}$ at some quadrature point $x_{q}\in\hat{S}$, i.e., $\gamma(\widehat{\mathbf{u}}_{h})<0$ there, then, since $\mathbf{u}_{\mathbf{K}}\in\mathcal{R}$, the straight line $\mathbf{s}_{q}(\psi)$, connecting $\mathbf{u}_{\mathbf{K}}$ and the point value $\widehat{\mathbf{u}}_{q}=\widehat{\mathbf{u}}_{h}(x_{q})$, intersects the boundary of $\mathcal{R}$. This line is parameterized by \begin{equation} \mathbf{s}_{q}(\psi)=\psi\,\widehat{\mathbf{u}}_{q}+(1-\psi)\,\mathbf{u}_{\mathbf{K}}, \end{equation} where $\psi\in[0,1]$. The intersection point $\psi_{q}$ is obtained by solving $\gamma(\mathbf{s}_{q}(\psi))=0$ for $\psi$. (In practice, $\psi$ need not be accurate to many significant digits, and a bisection algorithm terminated after a few iterations is sufficient.) This completes the description of the major steps in the scheme presented in \citet{ChEnHa19}. \subsection{Hybrid Methods} From the preceding sections, it is clear that the landscape of approaches to neutrino transport, and the associated numerical methods, is growing rapidly. One- and two-moment models have reached a level of maturity where general relativistic core-collapse supernova modeling is feasible \citep[e.g.,][]{KuTaKo16,RaJuJa19}. Multidimensional models with Boltzmann neutrino transport -- e.g., using discrete ordinate or Monte Carlo methods -- are also under development and results in axial symmetry have already been published \citep{NaIwFu17}, but more work is needed to reach the same level of maturity as found in moments-based models. One primary reason is, of course, the computational cost associated with transport models that provide better resolution of the angular dimensions of momentum space, such as Boltzmann models. In particular, the computational cost of the neutrino--matter coupling problem increases dramatically with increased fidelity in this sector.
However, the multiscale nature of the neutrino transport problem implies that Boltzmann neutrino transport is probably not necessary everywhere in a simulation. On the one hand, the radiation field is well captured by the low-order moment models in the collision dominated region below the neutrinospheres. On the other hand, higher-fidelity models may be warranted in the gain region since heating rates are sensitive to the angular shape of the neutrino distributions. (There is already some evidence that two-moment closures are unable to capture certain details in the radiation field; e.g., \cite{HaNaIw19}.) This motivates the use of hybrid methods, which, for example, aim to combine low- and high-fidelity approaches in order to provide sufficient resolution where needed, but at a reduced computational cost. Hybrid approaches are used in many areas of computational physics, but are not widely adopted to model neutrino transport in core-collapse supernovae. We note that the variable Eddington factor (VEF) method of \citet{RaJa02}, which has been shown to compare well with Boltzmann neutrino transport in spherical symmetry \citep{LiRaJa05}, can be regarded as a hybrid method, where a simplified (and less computationally expensive) Boltzmann solver is used in the context of a two-moment model to provide the moment closure. Adopting hybrid methods to model neutrino transport in multidimensional models is a potentially rewarding direction for near-future research, and some approaches may even be able to leverage investments in capabilities that have already been developed. Since these methods have not fully found their way into the core-collapse supernova modeling community, we will not go into details, but rather briefly mention some existing work, which in most cases will require further development to account for relativity and domain-specific microphysics details. We hope to report more on this interesting field in the future. So-called high-order--low-order (HOLO) approaches \citep[see, e.g., review by][]{ChChKn17} are one type of hybrid method gaining popularity for use in radiation transport (and related) applications, and combine, as the name suggests, high-fidelity solvers for the (Boltzmann) transport equation with lower-fidelity solvers (typically based on one- or two-moment models, and commonly in a gray formulation) to accelerate the process of solving the high-fidelity model --- in particular, the nonlinear coupling between radiation and a material background. In these applications, the radiation field is governed by a kinetic model, while the material is governed by a fluid-like model (as in the core-collapse supernova problem). The basic idea is that, in the collisional regime, the interaction between the kinetic and fluid components occurs in a low-dimensional subspace where only a few moments of the particle distribution function are needed to accurately capture the coupling. Thus, HOLO methods are effective primarily in regions where the particle mean free path is small and the problem is stiff, and one challenge is to ensure consistency between the two model components. Recent work on HOLO methods applied to the problem of thermal radiative transfer include applications where the high-order model is solved with continuum methods such as discrete ordinates \citep[e.g.,][]{PaKnRa12,PaKnRa13,LoMoGe19} or Monte Carlo methods \citep[e.g.,][]{PaKnRa14,BoClMo17}. 
We also point out related work on solving the linear transport equation (i.e., without nonlinear coupling to the material) with HOLO (or hybrid) methods by \cite{HaMc13,WiKeKn13,WiPaTa15,CrChGa17,CrChGa19,CrChHa20}. \section{Solution methods} When ultimately expressed in computer code, all of the previously discussed deterministic methods require the use of implicit numerical methods. When discretized, the transport equations produce a set of nonlinear algebraic equations. When linearized, these equations in turn lead to linear systems of equations that relate the values of the change in the distribution functions (or moments of the distribution functions) to the neutrino--matter and neutrino--neutrino interactions encoded in the terms on the right-hand side of the equations: the collision term. These source terms depend on the changes in the neutrino radiation field as well, giving rise to the need for implicit methods. The solution of these linear systems is associated with the dominant computational cost for any deterministic method for neutrino transport. The remainder of the panoply of physics that completes a core-collapse supernova model---hydrodynamics, nuclear kinetics, and even the global solution of the gravitational field---is typically associated with much less computational intensity and often requires significantly less memory capacity and bandwidth. Because the solution techniques for the transport linear systems depend on almost every important dimension of modern computer platforms---floating point performance, memory bandwidth, communication bandwidth and latency---the particulars of individual platforms become an important consideration when a practitioner looks to instantiate a real implementation in the form of a production code. Therefore, the structural components of modern computers and the quantitative requirements for realistic modeling of transport are inextricably linked together when one looks to build a neutrino radiation hydrodynamics code. \begin{figure}[htb] \includegraphics[width=\textwidth]{linearSystemFig.pdf} \caption{A schematic of the structure of a typical neutrino transport linear system that must be solved at each time step. The diagonal, dense blocks are generally non-symmetric and have characteristic substructure arising from the coupling in angle, energy, isospin (i.e., between neutrinos and antineutrinos), and neutrino flavor, though the particulars of that structure are dependent on the lexical ordering of the solution vector. Fully implicit methods also couple individual spatial zones to one another, producing a linear system that contains a series of outlying bands in addition to the diagonally dominant dense block structure. This global linear system typically requires considerable communication on parallel platforms, where domain decomposition is often used to spread the spatial extent of the problem across the distributed memory space. IMEX methods do not require solution of this global system, but the inversion of a similarly structured set of dense blocks is required at each spatial index. However, this reduction of the implicit problem to a purely local operation can result in considerable performance advantages. } \label{fig:linearSystemFig} \end{figure} \subsection{Simulation requirements} Regardless of the particulars of the architecture enlisted to solve the requisite equations, the computational demands of neutrino radiation hydrodynamics are prodigious.
Some of these demands are imposed directly by the high dimensionality of the transport equation itself. The need to discretize the neutrino phase space with adequate resolution to capture the particulars of the neutrino--matter interactions (cf.\ Sect.~\ref{sec:interactions}) results in energy resolutions that are typically on the order of dozens of groups. This requirement is amplified by the need to spatially resolve matter features in the flow that are of roughly the size of the neutrino mean free path at various points in the computational domain. Adaptive mesh refinement (AMR) can help ameliorate the need to refine the grid everywhere to resolve the shortest mean free paths, but this reduction is typically only partially effective. Indeed, the time-dependent nature of the core-collapse supernova problem often leads to much of the grid having to be refined as the reheating and explosion epochs evolve. These resolution requirements directly impact the size of the linear systems that must be solved via deterministic methods, typically resulting in quadratic growth in the size of the system for increases in any given phase space dimension. Therefore, the product of required energy resolution, spatial resolution, number of neutrino flavors and their distribution functions or their angular moments directly translates into a need for \emph{scalable} implementations of the solution algorithms. Any implementation needs to be able to effectively take advantage of any future platform. This type of scalability is typically termed weak scaling. The figure of merit for weak scaling is how close to a constant runtime can be achieved as the computational load is increased commensurately with the amount of resources. For example, as problem size is increased along with the number of MPI ranks used in a simulation, good weak scalability is achieved if the runtime remains constant. Weak scalability is often highly dependent on effective distributed-memory parallelism, including possibly overlapping slow inter-node communication with on-node computation. However, this is a necessary, but not sufficient, condition for effective investigation. The resultant simulations must also be capable of execution in reasonable amounts of wall-clock time. Runtimes of several months are untenable if one wishes to explore a more-or-less complete set of supernova progenitors. Therefore, reducing the wall-clock time for transport computations is equally important. This so-called strong scalability is achievable if node-level execution is made faster. On modern platforms, this has very much become a question of the effective use of hybrid-node architectures. \subsection{Implementation on heterogeneous architectures} Currently, the most widely available and performant microarchitectures are based on graphical processing units (GPUs). As suggested by their name, GPUs were originally designed to handle computer graphics-intensive tasks in applications ranging from scientific visualization to video games. However, the very high intensity with which they compute and their relatively low power-consumption traits (as compared to modern CPUs) led to their adoption as engines for a variety of scientific computing tasks. Indeed, at this writing, GPU-based architectures dominate much of the highest-end HPC platforms, and \emph{all} planned near-future exascale platforms will employ GPUs as the primary source of compute power. 
The primary characteristic that provides the compute power of modern GPUs is the large number of compute cores, as compared to traditional CPUs. Modern GPUs (e.g., the NVIDIA V100) contain more than 5000 cores, compared to the few dozen that are present on contemporary CPUs. Each core may have a relatively low clock speed compared to a CPU, but the sheer number of processors available on a GPU leads to a much higher intensity of computation. The architecture of GPUs is wholly shaped by the single-instruction, multiple-data (SIMD) execution model. In this execution model, each execution unit takes as input two vectors, performs identical operations on both sets of operands (one operand from each vector), and produces a resultant vector. Modern CPUs also typically contain SIMD units: MMX, SSE, and AVX instructions are available on Intel architectures, and POWER and ARM architectures have similar extensions to execution sets to take advantage of similar units. In the case of GPUs, however, these instructions are essentially the only ones available, restricting the amount of branching and conditional execution that can be effectively carried out by the device. All modern GPU architectures make use of a similar set of hardware components and associated software abstractions. Here, we will primarily make use of the nomenclature used by NVIDIA to describe their GPU devices, but other vendors make use of virtually identical concepts and constructions, albeit with slightly different naming. In all cases, \emph{kernels} are launched on the device as a set of \emph{threads}. Each of these threads executes a single SIMD pipeline. Within a kernel launch, threads are grouped into a number of \emph{blocks}. These thread blocks are mapped to individual \emph{streaming multiprocessors (SMs)}. Each SM executes threads in groups termed warps (the number of threads in a warp, or wavefront, is typically some multiple of 32). Inside each warp, a single, common instruction is executed during a clock cycle. This lockstep execution can be broken by conditionals (e.g., if-then-else instructions). When this occurs, the resulting \emph{thread divergence} breaks the parallelization of the warp: the divergent branches are executed serially, while the threads not on the active branch are stalled. This execution model is further complicated by the hierarchical memory on GPUs. Global memory is accessible by all cores. This global memory is typically several GBs on each device. It is often termed high-bandwidth memory, as its bandwidth is typically several times that of the DRAM attached to the CPU host. Closer to each multiprocessor there is a \emph{shared memory} that offers a space accessible to all cores inside the multiprocessor. It is typically used as a user-managed cache of the global memory. Access to this cache is typically much faster than fetching operands from the global memory for each core. Ultimately, each core has a certain number of registers that provide the greatest memory bandwidth, but, concomitantly, have the smallest capacities. Programming GPUs relies on providing as many operands as possible at the maximum possible rate to all of the SMs on a device. The complexity of the memory hierarchy, the execution model, and the possibility of thread divergences can make this a formidable programming task. Several programming models have been introduced to program GPUs.
These include: \begin{enumerate} \item CUDA: a minor extension of C/C++ for GPU thread programming. CUDA is a proprietary programming model created and supported by NVIDIA. \item ROCm: an extension of C/C++, much like CUDA in purpose and syntax. ROCm was created by AMD and is open source. \item OpenCL: a multi-vendor standard. OpenCL is designed to work on a wide variety of platforms, not just GPUs. This makes the model very powerful, but also introduces a measure of irreducible complexity to accommodate this power. \item OpenACC: A directive-based approach to GPU programming, OpenACC uses code decoration much like OpenMP or other directives-based models. OpenACC provides a straightforward path for GPU programming in Fortran. \item OpenMP (with offload): Modern OpenMP standards include a set of extensions to provide facilities for thread-level programming on GPU devices. \end{enumerate} The choice for any programmer between these options depends on the code to be produced and the relative agility of the development team. For neutrino radiation transport, the Oak Ridge group, for example, has chosen to work primarily in Fortran, with OpenMP directives to marshal the GPUs. This approach allows them to extend legacy code (in Fortran) in a straightforward and performant manner. Using OpenMP provides them with a measure of platform independence, as it is the only programming model currently envisaged to be supported on all major GPU hardware (i.e., NVIDIA, AMD, and Intel devices). The partial loss of thread-level control ceded by not using a lower-level model like CUDA or ROCm is not so important for radiation transport, as the vectorized computational kernels produced in evaluation of both the left- and right-hand sides of the transport equation provide plenty of floating-point operations to saturate any modern GPU streaming multiprocessor. Therefore, decorating the multi-level loop nests that contain these vectorized operations at their deepest levels with directives is an effective model. In addition, this programming model can be effectively and easily extended with GPU-enabled scientific libraries (e.g., the GPU-accelerated version of BLAS), regardless of the model used by those libraries internally. Many computational radiation transport practitioners have moved to Monte Carlo (MC) approaches in recent years, driven to this choice by the relative abundance of compute power available on GPUs. However, these approaches are not without complexities on GPUs, as the widely disparate sizes of the memory spaces described above (i.e., GBs to kBs to bytes as one moves from global memory to shared memory to registers) mean that MC histories are not so simply preserved. These complications mean that the relative expense of Monte Carlo methods (cf.\ Sect.~\ref{sec:MC}) cannot be fully ameliorated by porting to GPUs. Because the dense linear algebra underpinning their implementations does make effective use of GPU compute architectures, IMEX and discrete ordinates approaches have the potential to compete with MC approaches at a reduced memory footprint. But this strong reliance on a single class of numeric operations means that the success of these approaches is almost wholly dependent on the performance of linear algebra subprograms on GPUs. This is especially true for so-called batched execution of the solution of linear systems of equations, wherein several matrices and right-hand sides are solved by a single kernel invocation and the solver effectively divides the work among SMs.
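The data layout behind such batched solves is easy to illustrate. The following sketch (ours) uses NumPy's stacked-array convention, which mirrors the layout consumed by batched GPU routines (e.g., the batched LU factorization routines in cuBLAS, or \texttt{cupy.linalg.solve}); the zone count and block size are arbitrary placeholders:

\begin{verbatim}
import numpy as np

n_zones, n = 1024, 64    # spatial zones; dense block size per zone
rng = np.random.default_rng(2)
A = rng.standard_normal((n_zones, n, n)) + n*np.eye(n)  # one block/zone
b = rng.standard_normal((n_zones, n, 1))

# A single call factors and solves all blocks; a GPU library divides
# the same batch of independent dense solves among the SMs.
x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)
\end{verbatim}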
\section{Summary and outlook} The last decade has seen considerable, and accelerated, progress made on multiple fronts: (1) Ascertaining the explosion mechanism of core-collapse supernovae. (2) The development of the theory of general relativistic neutrino radiation hydrodynamics. (3) The development of robust numerical methods for the solution of the neutrino radiation hydrodynamics equations in core-collapse supernova environments. (4) The application of these methods in increasingly sophisticated three-dimensional core-collapse supernova models. At this point, it is fair to say that we are theory and methods rich and that the frontier lies more in the application of these methods in three-dimensional core-collapse supernova models, although further method development is certainly needed. Three-dimensional, fully general relativistic models with all of the relevant neutrino physics in multi-frequency one- or two-moment approaches are on the horizon, the leading examples of which are documented in the work of \citet{KuTaKo16,RoOtHa16,RaJuJa19}. But counterpart models in three dimensions using Boltzmann neutrino transport are farther off, though here too there is a leading example in the work of \citet{NaIwFu17}. Adding a new dimension to the discussion, three-dimensional Boltzmann-based models are limited right now more by supercomputing capabilities than anything else. We have documented both moments and Boltzmann approaches here that have been developed and used by multiple research groups. Boltzmann approaches have been used in core-collapse supernova models with reduced spatial dimensionality and have served to gauge moments approaches in multidimensional models for some time. Recent developments further emphasize the need for Boltzmann-based models. The history of core-collapse supernova theory has seen quantum leaps on a number of occasions over more than fifty years, often associated with an increased glimpse of the rich physics that drives such supernovae. In the past five years, evidence has mounted that neutrino quantum effects---specifically, due to neutrino--neutrino coupling in the proto-neutron star surface region---may impact the electron-flavor neutrino luminosities and spectra responsible for neutrino shock reheating and, consequently, may play a role in the supernova mechanism itself. These early conclusions will require the same extensive development to supplant them as has been documented here for the classical neutrino transport problem. We are far from the equivalent three-dimensional, general relativistic, full-physics models that deploy neutrino quantum kinetics. Early serious work on the implementation of neutrino quantum kinetics in supernova-like environments (e.g., see \citealt{RiMcKn19}) has illuminated new numerical challenges that will in turn require augmented methods, to handle both the classical and the quantum mechanical evolution of the three-flavor neutrino radiation field. In this context, then, it is very clear that a Boltzmann kinetic approach, which is a component of a complete quantum kinetics approach, must be a major step toward instantiating full neutrino quantum kinetics. We look forward to watching progress on this front and reporting on these developments as well, as they mature. The core-collapse supernova problem continues to manifest itself as a generational problem, one that will continue to serve as a fertile testbed for the development of transport and radiation hydrodynamics methods.
\begin{acknowledgements} The authors would like to acknowledge extensive and fruitful discussions with Ernazar Abdikamalov, Thomas Janka, Oliver Just, Takami Kuroda, Hiroki Nagakura, Martin Obergaulinger, Nimoy Rahman, and Doug Swesty regarding their methods, as well as discussions with Cory Hauck. The authors would also like to acknowledge Robert Bollig, Marc Herant, Thomas Janka, Tobias Melson, Bernhard M\"uller, and Hiroki Nagakura for their willingness to include figures from their manuscripts in this review. AM and EE would like to acknowledge support from the National Science Foundation Gravitational Physics Theory program, through grant PHY 1806692. EE and OEBM are supported by the U.S.\ Department of Energy (DOE) Nuclear Physics and/or Advanced Scientific Computing Research programs at the Oak Ridge National Laboratory, which is supported by the Office of Science of the DOE under Contract DE-AC05-00OR22725. AM, EE, and OEBM acknowledge support from the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S.\ Department of Energy Office of Science and the National Nuclear Security Administration. AM acknowledges support from the U.S. Department of Energy, Office of Science, Office of Nuclear Physics and Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program under Award Number DE-SC0018232. \end{acknowledgements}
\section{\label{sec:introduction} Introduction} The Kohn-Sham (KS) formulation of density functional theory (DFT)\cite{PhysRev.140.A1133,jones2015density,RevModPhys.61.689} is widely used to study the electronic structure of atoms, molecules, and solids because of its low computational cost and the availability of easy-to-use software packages. The practical application of DFT requires an approximation to the exchange-correlation (XC) functional. The simplest form of the XC functional is the local spin density approximation (LSDA)\cite{PhysRev.140.A1133,PhysRevB.23.5048}, which belongs to the lowest rung of the ladder of XC functionals\cite{doi:10.1063/1.1390175}. The higher rungs of the ladder contain more complex and more accurate functionals: the generalized gradient approximation (GGA), meta-GGAs, hybrids, and functionals that include the virtual orbitals. Practically all efforts in functional design have been focused on improving the energetics or equilibrium properties such as atomization energies, bond distances, etc. The majority of density functional approximations suffer from self-interaction errors (SIE), though the magnitude of the error can vary from one class of functionals to another or from one parameterization to another within a given class. The SIE occurs as a result of the incomplete cancellation of the self-Coulomb energy by the self-exchange energy of the approximate XC functional. Many failures of density functional approximations (DFAs) have been attributed to the SIE. The SIE causes the potential to decay asymptotically as $-\exp(-r)$ instead of the correct $-1/r$ decay for finite neutral systems. As a result, the DFAs produce errors such as too-shallow eigenvalues of valence orbitals, inaccurate chemical reaction barriers, electron delocalization errors, incorrect charges on dissociated fragments, incorrect binding energies for anions, etc.\cite{PhysRevB.23.5048,doi:10.1063/1.3021474,doi:10.1063/1.4829642,doi:10.1063/1.1630017} The $-1/r$ asymptotic behavior is also important for the computation of electronic properties that are sensitive to virtual orbitals and long-range density, such as excited states. A number of approaches to remove the SIE have been proposed.\cite{lindgren1971statistical,PhysRevA.15.2135,perdew1982density,lundin2001novel,doi:10.1063/1.2403848,gidopoulos2012constraining,doi:10.1002/jcc.10279,doi:10.1063/1.2204599,doi:10.1063/1.5129533,PhysRevB.76.033102,doi:10.1063/1.4866996} Early approaches\cite{lindgren1971statistical,PhysRevA.15.2135} used orbitalwise schemes to eliminate the SIE, but applied them to functionals related to Slater's X$\alpha$ method\cite{slater1951simplification}. The most widely used approach to remove the SIE is the one proposed by Perdew and Zunger (PZ)\cite{PhysRevB.23.5048}. Their approach is commonly referred to as PZ self-interaction correction (PZSIC), in which the one-electron SIE due to both exchange and correlation is removed from a DFA calculation on an orbital-by-orbital basis. PZSIC provides the exact cancellation for one- and two-electron self-interaction (SI), but not necessarily for many-electron SI\cite{doi:10.1063/1.2566637}.
It has been applied to study properties of atoms, molecules, clusters, and solids.\cite{doi:10.1063/1.481421,doi:10.1063/1.1327269,doi:10.1063/1.1370527,harbola1996theoretical,doi:10.1063/1.1468640,doi:10.1021/jp014184v,PhysRevA.55.1765,doi:10.1080/00268970110111788,Polo2003,doi:10.1063/1.1630017,B311840A,doi:10.1063/1.1794633,doi:10.1063/1.1897378,doi:10.1063/1.2176608,zope1999atomic,zope2000momentum,fois1993self,doi:10.1063/1.2204599,doi:10.1002/jcc.10279,PhysRevA.45.101,PhysRevA.46.5453,lundin2001novel,PhysRevA.47.165,doi:10.1021/acs.jctc.6b00347,csonka1998inclusion,petit2014phase,kummel2008orbital,schmidt2014one,kao2017role,doi:10.1002/jcc.25586,jonsson2007accurate,rieger1995self,temmerman1999implementation,daene2009self,szotek1991self,messud2008time,messud2008improved,doi:10.1063/1.1926277,korzdorfer2008electrical,korzdorfer2008self,ciofini2005self,PhysRevA.50.2191,doi:10.1063/1.5125205,C9CP06106A,doi:10.1002/jcc.25767,doi:10.1021/acs.jctc.8b00344,doi:10.1063/1.4947042,schwalbe2019pyflosic,doi:10.1063/1.4996498,doi:10.1063/1.5050809,doi:10.1021/acs.jpca.8b09940,Jackson_2019,Sharkas11283} PZSIC is an orbital-dependent theory and, when used with the KS orbitals, results in a size-extensivity problem. In PZSIC, local orbitals are used to keep the corrections size-extensive. Traditionally, PZSIC requires solving the so-called Pederson or localization equations (LE)\cite{doi:10.1063/1.446959,doi:10.1063/1.448266} to find the set of local orbitals that minimizes the total energy. Solving the LE and finding the optimal orbitals compliant with this condition is computationally expensive, since it requires solving the LE for each pair of orbitals. Pederson \textit{et al.} in 2014 used Fermi-L\"owdin orbitals\cite{Luken1982,Luken1984} (FLOs) to solve the PZSIC equations. This approach is known as FLO-SIC\cite{doi:10.1063/1.4869581,PEDERSON2015153}. FLOs are a L\"owdin-orthogonalized set of Fermi orbitals (FOs) that can be obtained from the KS orbitals. The FOs depend on the density matrix and the spin density. The FLOs are the local orbitals that make the PZSIC total energy unitarily invariant. For the construction of FLOs, Fermi orbital descriptor (FOD) positions are used as $3N$ parameters in space that can be optimized in a manner analogous to the optimization of atomic positions in molecular structure optimization. The FLOSIC method has a computational advantage over traditional PZSIC, since it requires optimizing only $3N$ parameters instead of the $N^2$ parameters of the transformation to the local orbitals. Earlier applications of FLO-SIC with LSDA showed significant improvements in atomic and molecular properties over SI-uncorrected LSDA performance\cite{doi:10.1021/acs.jctc.6b00112,doi:10.1063/1.4996498,kao2017role,FLOSICcode}. Naturally, FLOSIC was later also applied to XC functionals more sophisticated than LSDA, such as Perdew-Burke-Ernzerhof (PBE) and the Strongly Constrained and Appropriately Normed (SCAN) functional, to see whether SIC improves the performance of functionals on the higher rungs\cite{doi:10.1063/1.5125205,C9CP06106A,doi:10.1002/jcc.25767,doi:10.1063/1.5050809,Jackson_2019,PhysRevA.100.012505,doi:10.1021/acs.jpca.8b09940,doi:10.1063/1.5087065,doi:10.1063/1.5129533,doi:10.1021/acs.jctc.8b00344,Sharkas11283,doi:10.1063/1.5120532,SingHam,doi:10.1063/5.0004738,doi:10.1002/jcc.25586,schwalbe2019pyflosic}.
PZSIC, when applied to semi-local functionals such as the PBE GGA and the SCAN meta-GGA, provides good descriptions of stretched-bond situations and yields bound atomic anions, but this improvement occurs at the expense of worsening\cite{doi:10.1063/1.1794633,doi:10.1063/1.4752229,doi:10.1063/1.5087065,doi:10.1063/1.5120532,PhysRevA.84.050501,JONSSON20151858} the performance for properties where the SI-uncorrected DFA performs well. Shahi \textit{et al.}\cite{doi:10.1063/1.5087065} recently attributed the poor performance of PZSIC with GGAs and higher-rung functionals to the nodality of the local orbital densities. The use of complex localized orbitals with nodeless densities in PZSIC calculations by Kl\"upfel, Kl\"upfel, and J\'onsson\cite{PhysRevA.84.050501} shows that complex orbital densities alleviate the worsening of atomization energies when used with the PBE functional. This conflicting performance of PZSIC has been called the paradox of SIC by Perdew and coworkers\cite{PERDEW20151}. The worsening of energetics pertaining to the equilibrium region is primarily a result of the overcorrecting tendency of PZSIC. A few methods have been proposed to mitigate this overcorrecting tendency by scaling down the SIC contribution. J\'onsson's group simply scaled the SIC by a constant scaling factor\cite{doi:10.1063/1.4752229}. In a similar spirit, Vydrov \textit{et al.} proposed a method to scale down the SIC according to an orbital-dependent scaling factor (OSIC)\cite{doi:10.1063/1.2176608}. This method, however, does not provide significant improvement across all properties: it improved the PZSIC atomization energies but worsened barrier heights. Moreover, the scaling approach of Vydrov \textit{et al.} worsens the asymptotic description of the effective potential, causing atomic anions to be unbound. Ruzsinszky \textit{et al.}\cite{doi:10.1063/1.2387954} found that the many-electron SIE and the fractional-charge dissociation behavior of positively charged dimers reappear in the OSIC of Vydrov {\it et al.} The selective OSIC (SOSIC) method of Yamamoto and coworkers\cite{doi:10.1063/5.0004738}, which selectively scales down the SIC in many-electron regions, overcomes the deficiencies of the OSIC method and gives stable atomic anions as well as improved total atomic energies. It also improves the barrier heights over the OSIC method. Very recently, Zope {\it et al.}\cite{doi:10.1063/1.5129533} proposed a new SIC method which identifies single-electron regions using iso-orbital indicators and corrects for the SIE in a pointwise fashion by scaling down the SIC. The iso-orbital indicator serves as a weight in the numerical integration and identifies both the single-orbital regions, where the full correction is needed, and the uniform-density regions, where the DFAs are already exact and no correction is needed. They called the new SIC method local-SIC (LSIC)\cite{doi:10.1063/1.5129533} and assessed its performance for a wide array of properties using LSDA. Unlike PZSIC, LSIC provided remarkable performance both for equilibrium properties like atomization energies and for the stretched-bond situations that occur in barrier-height calculations. The LSIC method makes use of an iso-orbital indicator to identify one-electron regions. It offers an additional degree of freedom in that a suitable iso-orbital indicator can be used or designed to identify one-electron regions or to tune the SIC contribution in a pointwise manner.
In the original LSIC work, Zope {\it et al.} used the ratio of the von Weizs\"acker and total kinetic energy densities as the choice for the local scaling factor. This iso-orbital indicator has been used in the construction of self-correlation-free meta-GGAs, in the regional SIC\cite{doi:10.1002/jcc.10279}, and also in local hybrid functionals\cite{jaramillo2003local,doi:10.1063/1.4865942}. Several different choices for the local scaling factor are already available in the literature. Alternatively, new iso-orbital indicators particularly suitable for LSIC can be constructed. In this work, we explore the performance of the LSIC method using a simple ratio of the orbital density and the spin density as the weight of the SIC correction at a given point in space. This is the same scaling factor used by Slater to average the Hartree-Fock exchange potential in his classic work that introduced the Hartree-Fock-Slater method\cite{slater1951simplification}. We refer to this choice of scaling factor as LSIC($w$) for the remainder of this manuscript, and we use LSIC($z$) to refer to the first LSIC application, where the scaling factor is the ratio of the von Weizs\"acker and total kinetic energy densities. We investigate the performance of LSIC($w$) for a few atomic properties: total energies, ionization potentials, and electron affinities. For molecules, we calculated the total energies, atomization energies, and the dissociation energies of a few selected systems. We find that LSIC($w$) provides results comparable to LSIC($z$). We also show a case where LSIC($w$) performs better than the original LSIC($z$). Additionally, we examine the performance of the scaling factor $w$ based on the density ratio within the OSIC scheme. In the following section, brief descriptions of the PZSIC, OSIC, and LSIC methods are presented. These methods are implemented using the FLOs; therefore, very brief definitions pertaining to FLOs are also presented. The results and discussion are presented in the subsequent sections. \section{\label{sec:s2} Theory and computational method} \subsection{Perdew-Zunger and Fermi-L\"owdin Self-Interaction Correction} In the PZSIC method\cite{PhysRevB.23.5048}, the SIE is removed on an orbital-by-orbital basis from the DFA energy as \begin{equation}\label{eq:pzsic} \begin{aligned} E^{PZSIC-DFA}&=E^{DFA}[\rho_{\uparrow},\rho_{\downarrow}] -\sum_{i\sigma}^{occ}\left\{ U[\rho_{i\sigma}]+E_{XC}^{DFA}[\rho_{i\sigma},0] \right\}, \end{aligned} \end{equation} where $i$ is the orbital index, $\sigma$ is the spin index, $\rho$ ($\rho_{i\sigma}$) is the electron density (local orbital density), $U[\rho_{i\sigma}]$ is the exact self-Coulomb energy, and $E_{XC}^{DFA}[\rho_{i\sigma},0]$ is the self-exchange-correlation energy for a given DFA XC functional. Perdew and Zunger applied this scheme to atoms using the Kohn-Sham orbitals. For larger systems the Kohn-Sham orbitals can be delocalized, which would result in a violation of size extensivity. Therefore, local orbitals are required. This was recognized long ago by Slater and Wood\cite{slater1970statistical} in 1971, was also emphasized by Gopinathan\cite{PhysRevA.15.2135} in the context of the self-interaction correction of the Hartree-Slater method, and was later stressed by Perdew and Zunger in the context of approximate Kohn-Sham calculations. Subsequent PZSIC calculations by the Wisconsin group\cite{PhysRevB.28.5992,Harrison_1983,doi:10.1063/1.446959,doi:10.1063/1.448266} used local orbitals in a variational implementation. It was shown by Pederson and coworkers that the local orbitals used in Eq.
(\ref{eq:pzsic}) must satisfy the localization equations (LE) for the variational minimization of the energy. The LE for the orbitals $\phi_{i\sigma}$ are a pairwise condition, given as \begin{equation}\label{eq:LE} \langle\phi_{i\sigma} |V_{i\sigma}^{SIC}-V_{j\sigma}^{SIC} | \phi_{j\sigma}\rangle=0. \end{equation} In the FLOSIC approach, FLOs are used instead of solving Eq. (\ref{eq:LE}) directly. First, the FOs $\phi^{FO}$ are constructed from the density matrix and the spin density at special positions in space, called Fermi orbital descriptor (FOD) positions, as \begin{equation}\label{eq:3} \phi_{i }^{FO}(\vec{r}) = \frac{ \sum_{j}^{N_{occ}} \psi_{j}( \vec{a_{i}})\psi_{j}(\vec{r}) } { \sqrt{\rho(\vec{a_{i}}) }}. \end{equation} Here, $i$ and $j$ are the orbital indices, $\psi$ is a KS orbital, $\rho$ is the electron spin density, and $\vec{a_{i}}$ is the FOD position. The FOs are then orthogonalized with L\"owdin's scheme to form the FLOs. The FLOs are used for the calculation of the SIC energy and potential. In this method, the optimal set of FLOs is found by finding the FOD positions that minimize the total energy. This optimization process is similar to a geometry optimization. We note that FLOs can be used in all three SIC methods considered here (PZSIC, OSIC, and LSIC). \subsection{Orbitalwise scaling of SIC} As mentioned in Sec. \ref{sec:introduction}, PZSIC tends to overcorrect the DFA energies, and several modifications to PZSIC were proposed to \textit{scale down} the PZSIC correction. In the OSIC method of Vydrov \textit{et al.}\cite{doi:10.1063/1.2176608} mentioned in the Introduction, Eq. (\ref{eq:pzsic}) is modified to \begin{equation}\label{eq:orbsic} \begin{aligned} E^{OSIC-DFA}&=E^{DFA}[\rho_\uparrow,\rho_\downarrow] -\sum_{i\sigma}^{occ}X_{i\sigma}^{k}\left(U[\rho_{i\sigma}]+E_{XC}^{DFA}[\rho_{i\sigma},0] \right), \end{aligned} \end{equation} where each local orbitalwise scaling factor $X_{i\sigma}^k$ is defined as \begin{equation}\label{eq:OSIC_scalingfactor} X^{k}_{i\sigma}=\int z_\sigma^k(\vec{r}) \rho_{i\sigma}(\vec{r})d^3\vec{r}. \end{equation} Here, $i$ indicates the orbital index, $\sigma$ is the spin index, $z_\sigma$ is the iso-orbital indicator, and $k$ is an integer. The quantity $z_\sigma$ interpolates between single-electron regions ($z_\sigma=1$) and uniform-density regions ($z_\sigma=0$). In their original work, Vydrov \textit{et al.} used $z_\sigma = \tau_\sigma^W/\tau_\sigma$ to study the performance of OSIC with various XC functionals, where $\tau_{\sigma}^W(\vec{r}) = |\vec{\nabla}\rho_{\sigma}(\vec{r})|^2/(8\rho_{\sigma}(\vec{r}))$ is the von Weizs\"acker kinetic energy density and $\tau_{\sigma}(\vec{r})=\frac{1}{2}\sum_i |\vec{\nabla}\psi_{i\sigma}(\vec{r})|^2$ is the non-interacting kinetic energy density. Satisfying the gradient expansion in $\rho$ requires $k\geq1$ for LSDA, $k\geq2$ for GGAs, and $k\geq3$ for meta-GGAs. Vydrov \textit{et al.}, however, used various values of $k$ to study their effect on the OSIC performance. In their subsequent work, Vydrov \textit{et al.}\cite{doi:10.1063/1.2204599} used \begin{equation}\label{eq:rhoi_rho} w_{i\sigma}^k(\vec{r})=\left(\frac{\rho_{i\sigma}(\vec{r})}{\rho_\sigma(\vec{r})}\right)^k, \end{equation} the weight used by Slater in averaging the Hartree-Fock potential, as a scaling factor instead of the kinetic energy ratio. They repeated the OSIC calculations using $w_{i\sigma}$ in place of $z_{\sigma}$ in Eq. (\ref{eq:OSIC_scalingfactor}). Notice that Eq.
(\ref{eq:rhoi_rho}) contains a local orbital index; this weight is thus an orbital-dependent quantity. $w_{i\sigma}$ approaches unity in single-orbital regions, since $\rho_\sigma(\vec{r}) = \rho_{i\sigma}(\vec{r})$ in that limit. Similarly, $w_{i\sigma}$ approaches zero in many-electron regions, where $\rho_\sigma(\vec{r}) \gg \rho_{i\sigma}(\vec{r})$. It was reported that OSIC with Eq. (\ref{eq:rhoi_rho}) showed performance comparable to $z_\sigma = \tau_\sigma^W/\tau_\sigma$, despite its simpler form. \subsection{LSIC} Though OSIC had some success in improving the performance with SIC, the approach leads to a $k$-dependent performance. Also, it gives a $-X_{HO}/r$ asymptotic potential instead of $-1/r$ for finite neutral systems, and it results in an inaccurate description of dissociation behavior\cite{doi:10.1063/1.2566637}. In addition, the many-electron SIE and the fractional-charge dissociation behavior of positively charged dimers reemerge with OSIC\cite{doi:10.1063/1.2387954}. The recent LSIC method of Zope {\it et al.} applies the SIC in a different way than OSIC and retains the desirable features of PZSIC. In LSIC, the SIC energy density is scaled down \textit{locally} as follows, \begin{equation}\label{eq:LSIC} \begin{aligned} E^{LSIC-DFA}&= E^{DFA}[\rho_{\uparrow},\rho_{\downarrow}] -\sum_{i\sigma}^{occ} \left( U^{LSIC}[\rho_{i\sigma}] +E_{XC}^{LSIC}[\rho_{i\sigma},0]\right), \end{aligned} \end{equation} where \begin{equation}\label{eq:LSIC_U} U^{LSIC}[\rho_{i\sigma}]=\frac{1}{2}\int d^3\vec{r} \,z_\sigma(\vec{r})^k \,\rho_{i\sigma}(\vec{r})\int d^3\vec{r'}\,\frac{\rho_{i\sigma}(\vec{r'})}{|\vec{r}-\vec{r'}|}, \end{equation} \begin{equation}\label{eq:LSIC_XC} E_{XC}^{LSIC}[\rho_{i\sigma},0]=\int d^3\vec{r} \, z_\sigma(\vec{r})^k \,\rho_{i\sigma}(\vec{r}) \epsilon_{XC}^{DFA}([\rho_{i\sigma},0],\vec{r}). \end{equation} LSIC uses an iso-orbital indicator to apply the SIC pointwise in space. An ideal choice of iso-orbital indicator should be such that LSIC reduces to the DFA in the uniform-gas limit and to PZSIC in the pure one-electron limit. To demonstrate the LSIC concept, Zope \textit{et al.} used $z_\sigma=\tau_\sigma^W/\tau_\sigma$ as the iso-orbital indicator. In this study, however, we use $w_{i\sigma}(\vec{r}) = \rho_{i\sigma}(\vec{r}) / \rho_{\sigma}(\vec{r})$ in place of $z_\sigma$ in Eqs. (\ref{eq:LSIC_U}) and (\ref{eq:LSIC_XC}). We refer to the LSIC with $z_{\sigma}(\vec{r})$ as LSIC($z$) and to the LSIC with $w_{i\sigma}(\vec{r})$ as LSIC($w$) to differentiate the two cases. \subsection{Computational details} All of the calculations were performed using the developmental version of the FLOSIC code\cite{FLOSICcode,FLOSICcodep}, software based on the UTEP-NRLMOL code. The PZSIC, OSIC, and LSIC methods using FLOs are implemented in this code. The FLOSIC/NRLMOL code uses Gaussian-type orbitals\cite{PhysRevA.60.2840} whose default basis sets are of similar quality to quadruple-zeta basis sets. We used the NRLMOL default basis sets throughout our calculations. For calculations of atomic anions, long-range s, p, and d single Gaussian orbitals are added to give a better description of the extended nature of the anions. The exponents of these added single Gaussians were obtained using the relation $\beta(N+1)=\beta(N)^2/\beta(N-1)$, where $\beta(N)$ denotes the $N$-th exponent; that is, the added exponents extend the geometric progression of the existing ones. The FLOSIC code uses a variational integration mesh\cite{PhysRevB.41.7453} that provides accurate numerical integration.
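Before turning to the results, the pointwise scaling that enters Eqs.~(\ref{eq:LSIC_U}) and (\ref{eq:LSIC_XC}) through the weight of Eq.~(\ref{eq:rhoi_rho}) can be illustrated with a self-contained numerical sketch. The example below evaluates the LSIC($w$)-scaled self-Coulomb and self-exchange corrections (exchange-only LSDA) for two mock spherical orbital densities on a radial grid; the 1s-type densities, the grid, and the trapezoidal quadrature are illustrative stand-ins for the FLOs and the variational mesh of the actual FLOSIC implementation, not the code itself.

\begin{verbatim}
import numpy as np

r = np.linspace(1e-6, 30.0, 20000)      # radial grid (atomic units)
dr = r[1] - r[0]

def slater_density(zeta):
    """Normalized 1s-type density rho(r) = zeta^3/pi * exp(-2 zeta r)."""
    return zeta**3 / np.pi * np.exp(-2.0 * zeta * r)

# Mock localized-orbital densities: a tight "core-like" and a diffuse
# "valence-like" one (illustrative only; real FLOs come from the SCF).
rho_orb = [slater_density(3.0), slater_density(1.0)]
rho_tot = sum(rho_orb)

CX = 0.75 * (3.0 / np.pi) ** (1.0 / 3.0)   # LSDA exchange constant

def radial_int(f):
    return np.trapz(f * 4.0 * np.pi * r**2, r)

def hartree_potential(rho):
    """Coulomb potential of a spherical density from cumulative shells."""
    q_in = np.cumsum(rho * 4.0 * np.pi * r**2) * dr
    v_out = np.cumsum((rho * 4.0 * np.pi * r)[::-1])[::-1] * dr
    return q_in / r + v_out

for k in (1, 2):
    e_corr = 0.0
    for rho_i in rho_orb:
        w = rho_i / rho_tot                 # weight of Eq. (6)
        # Eq. (8): the weight is applied pointwise in the outer integral.
        u_i = 0.5 * radial_int(w**k * rho_i * hartree_potential(rho_i))
        # Eq. (9) with exchange-only LSDA for a fully spin-polarized
        # orbital density: E_x[rho_i,0] = -2^(1/3) Cx int rho_i^(4/3).
        ex_i = -(2.0**(1.0/3.0)) * CX * radial_int(w**k * rho_i**(4.0/3.0))
        e_corr += u_i + ex_i
    print(f"k = {k}: orbitalwise correction removed from E_DFA = "
          f"{e_corr:.4f} Ha")
\end{verbatim}

Because $w_{i\sigma}\to 1$ where a single orbital dominates and $w_{i\sigma}\to 0$ where many orbitals overlap, larger $k$ quenches the correction more aggressively, which is the trend observed in the results below.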
In this work, our focus is on the LSDA functional, because LSIC applied to LSDA is free from the gauge problem\cite{doi:10.1063/5.0010375}, unlike GGAs and meta-GGAs, where a gauge transformation is needed since their XC potentials are not in the Hartree gauge. We used an SCF energy convergence criterion of $10^{-6}$ Ha for the total energy and an FOD force tolerance of $10^{-3}$ Ha/bohr for the FOD optimizations in the FLOSIC calculations. For the OSIC and LSIC calculations, we used the respective FLOSIC densities and FODs as a starting point and performed a non-self-consistent evaluation of the energy on the FLOSIC densities. Several values of the scaling power $k$ were used in the LSIC($w$) and OSIC($w$) calculations. The additional computational cost of the scaling factor in OSIC and LSIC is very small compared to that of a regular FLO-PZSIC calculation. \section{\label{sec:results} Results and discussion} The LSIC method was previously assessed for a wide array of electronic structure properties to obtain a good understanding of how the new methodology performs. Here, we assess the performance of LSIC($w$) vis-\`a-vis LSIC($z$) and OSIC($w$) using the same array of electronic properties. We considered total energies, ionization potentials, and electron affinities for atoms, and atomization energies, reaction barrier heights, and dissociation energies for molecules. \subsection{\label{sec:s3s1} Atoms } In this section, we present our results on total energies, ionization potentials, and electron affinities for atoms. \subsubsection{\label{sec:s3s2} Total energy of atoms} We compared the total energies of the atoms $Z=1-18$ against the accurate non-relativistic values reported by Chakravorty \textit{et al.}\cite{PhysRevA.47.3649}. Various integer values of $k$ were used for LSIC($w$) and OSIC($w$). The differences between our calculated total energies with $k=1$ and the reference values are plotted in Fig. \ref{fig:atoms-diff}. The plot clearly shows the effect of the scaling on the total energies of the atoms. Consistent with reported results, the LSDA total energies are too high compared to the accurate reference values\cite{PhysRevA.47.3649}, whereas PZSIC consistently underestimates the total energies due to its overcorrecting tendency. The LSIC method, for which both scaling factors perform similarly, provides total energies closer to the reference values than LSDA and PZSIC-LSDA. Likewise, the OSIC method also reduces the overcorrection, bringing the total energies into close agreement with the reference values. The mean absolute errors (MAEs) in the total energy with respect to the reference for various $k$ values are shown in Table \ref{tab:table1}. The MAE of PZSIC is $0.381$ Ha, whereas LSIC($w$) and OSIC($w$) show MAEs of $0.061$ and $0.074$ Ha, respectively, with $k=1$. LSIC($z$) shows a better performance than OSIC($w$) and LSIC($w$); the LSIC($w$) MAE is, however, of the same order of magnitude as the earlier reported LSIC($z$) MAE of 0.041 Ha\cite{doi:10.1063/1.5129533}. As the value of $k$ increases, the magnitude of the SI-correction is reduced. As a result, the MAEs become larger for $k > 1$, eventually approaching the LSDA numbers. For $k=0$ the scaled methods correctly reproduce the PZSIC results. The scaling is optimal for $k=1$, which results in an optimal magnitude of the SI-correction for LSIC($w$) and almost the right magnitude for OSIC($w$). When the magnitude of the SIC energy of each orbital is compared among the different methods, it is found that the SIC correction in LSIC($w$) is larger (i.e., less scaled down) for the core orbitals than in LSIC($z$).
This trend is reversed for the valence orbitals (cf. Table \ref{table:sic_amount}). It can be seen from Table \ref{table:sic_amount} that the total SIC energy in the two methods is essentially similar in magnitude. However, the way the scaling factors behave affects the orbitalwise contributions to the total SIC energy. This changes the SIC potentials and results in the two methods performing differently for cations and anions. For OSIC($w$), we find the smallest MAE, $0.070$ Ha, for $k=2$, a value slightly smaller than that for $k=1$. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{atoms.eps} \caption{Total energy difference (in hartree) of atoms $Z=1-18$ with respect to accurate nonrelativistic estimates\cite{PhysRevA.47.3649}.} \label{fig:atoms-diff} \end{figure} \begin{table} \caption{\label{tab:table1}Mean absolute error of the total atomic energy (in hartree) for atoms $Z=1-18$ with respect to accurate nonrelativistic estimates\cite{PhysRevA.47.3649}.} \centering \begin{tabular*}{0.68\textwidth}{@{\extracolsep{\fill}}lc} \toprule Method & MAE\\ \midrule PZSIC & 0.381\\ LSIC($z, k=1$) & 0.041\\\hline LSIC($w, k=1$) & 0.061\\ LSIC($w, k=2$) & 0.196\\ LSIC($w, k=3$) & 0.277\\ LSIC($w, k=4$) & 0.332\\ \hline OSIC($w, k=1$) & 0.074\\ OSIC($w, k=2$) & 0.070\\ OSIC($w, k=3$) & 0.135\\ \bottomrule \end{tabular*} \end{table} \begin{table} \caption{\label{table:sic_amount}Magnitude of the SIC energy (in hartree) per orbital type in the Ar atom for each method.} \begin{tabular*}{0.68\textwidth}{@{\extracolsep{\fill}}ccccc} \toprule Orbital & PZSIC & LSIC($z$) & LSIC($w$) & OSIC($w$)\\ \midrule 1s &-0.741 &-0.387 &-0.490 &-0.584 \\ 2sp$^3$ &-0.126 &-0.070 &-0.050 &-0.062 \\ 3sp$^3$ &-0.016 &-0.017 &-0.006 &-0.008 \\ \midrule Total SIC &-2.616 &-1.473 &-1.421 &-1.729 \\ \bottomrule \end{tabular*} \end{table} \subsubsection{\label{sec:s3s3} Ionization potential} The ionization potential (IP) is the energy required to remove an electron from the outermost orbital. Since the electron removal energy is related to the asymptotic shape of the potential, one can expect SIC to play an important role in determining IPs. We calculated the IPs using the $\Delta$SCF method, defined as \begin{equation} E_{IP}=E_{cat}-E_{neut}, \end{equation} where $E_{cat}$ is the total energy in the cationic state and $E_{neut}$ is the total energy in the neutral state. The calculations were performed for atoms from helium to krypton, and we compared the computed IPs against the experimental ionization energies\cite{NIST_ASD}. The FODs were relaxed both for the neutral atoms and for their cations. Fig. \ref{fig:atoms-IP} shows the difference of the calculated IPs with respect to the reference values. The MAEs of the different methods are shown in Table \ref{tab:table2}, for a subset $Z=2-18$ as well as for the entire set $Z=2-36$, to facilitate a comparison against the literature. For the smaller subset, $Z=2-18$, the MAEs are $0.248$ and $0.206$ eV for PZSIC and LSIC($z$), respectively. The MAE for OSIC($w$, $k=1$) is $0.223$ eV, showing an improvement over PZSIC. LSIC($w$, $k=1$) shows an MAE of $0.251$ eV, an error comparable to PZSIC. The MAEs increase for LSIC($w$, $k\geq2$) and OSIC($w$, $k\geq2$) in comparison to their respective $k=1$ MAEs. Interestingly, however, when we considered the entire set of atoms ($Z=2-36$), LSIC($w$) has MAEs of $0.238$ and $0.216$ eV for $k=1$ and $k=2$, respectively, showing smaller errors than PZSIC (MAE, $0.364$ eV); LSIC($w$) nevertheless falls short of LSIC($z$), which has the smallest error (MAE, 0.170 eV).
For this case, OSIC($w$, $k=1-3$) shows better performance than PZSIC, but not as good as LSIC($w$) for a given $k$. LSIC($z$) performs better than both LSIC($w$) and OSIC($w$). The difference in performance between LSIC($z$) and LSIC($w$) implies that the scaling of the SIC for the cationic states is more sensitive to the choice of local scaling factor than it is for the neutral atoms. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{atoms_dscf-IP.eps} \caption{Difference between the calculated and experimental\cite{NIST_ASD} ionization potentials (in eV) for the set of atoms $Z=2-36$.} \label{fig:atoms-IP} \end{figure} \begin{table} \caption{\label{tab:table2} Mean absolute error of the ionization potentials (in eV) for the sets of atoms $Z=2-18$ and $Z=2-36$ with respect to experiment\cite{NIST_ASD}.} \begin{tabular*}{0.68\textwidth}{@{\extracolsep{\fill}}lcc} \toprule Method &Z=2-18 (17 IPs) & Z=2-36 (35 IPs)\\ \midrule PZSIC & 0.248 & 0.364\\ LSIC($z, k=1$) & 0.206 & 0.170 \\\midrule LSIC($w, k=1$) & 0.251 & 0.238 \\ LSIC($w, k=2$) & 0.271 & 0.216\\ LSIC($w, k=3$) & 0.297 & 0.247\\ LSIC($w, k=4$) & 0.324 & 0.284\\\hline OSIC($w, k=1$) & 0.223 & 0.267\\ OSIC($w, k=2$) & 0.247 & 0.247\\ OSIC($w, k=3$) & 0.255 & 0.259\\ \bottomrule \end{tabular*} \end{table} \subsubsection{\label{sec:s3s4} Electron affinity } The electron affinity (EA) is the energy released when an electron is added to the system. We studied the EAs of 20 atoms that are experimentally found to bind an electron\cite{NIST_CCCBD}: the H, Li, B, C, O, F, Na, Al, Si, P, S, Cl, K, Ti, Cu, Ga, Ge, As, Se, and Br atoms. The EAs were calculated using the $\Delta$SCF method, $E_{EA}=E_{neut}-E_{anion}$, and the values were compared against the experimental EAs\cite{NIST_CCCBD}. Fig. \ref{fig:atoms-EA} shows the deviation of the EAs from the reference experimental values for the various methods. The MAEs are summarized in Table \ref{tab:table3}. We present the MAEs for two sets: a smaller subset containing hydrogen through chlorine (12 EAs) and the complete set, hydrogen through bromine (20 EAs). For the 12 EAs, the MAEs for PZSIC and LSIC($z$) are $0.152$ and $0.097$ eV, respectively. OSIC($w$) shows an MAE of $0.152$ eV for $k=1$, the same performance as PZSIC. LSIC($w$), however, does not perform as well as PZSIC, giving an MAE of $0.235$ eV for $k=1$. In both cases, the error decreases slightly for $k\geq2$, but there is no significant impact on the overall performance. For the 20 EAs, a similar trend persists. PZSIC and LSIC($z$) have MAEs of $0.190$ and $0.102$ eV, respectively. The MAEs of LSIC($w$) are in the range $0.176-0.224$ eV for $k=1-4$, and those of OSIC($w$) are between $0.155$ and $0.172$ eV for $k=1-3$. Again, a decrease in error is observed as the value of $k$ increases. In particular, a larger discrepancy between LSIC($w$, $k=1$) and experiment is seen for the O, F, and Ti atoms. This is because LSIC($w$) raises the anion energies more than the corresponding neutral-state energies.
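The $\Delta$SCF bookkeeping behind both the IPs and the EAs above amounts to simple total-energy differences. A minimal sketch with placeholder energies (hypothetical values, not data from the tables) fixes the units and sign conventions; in the actual calculations the FODs are, of course, relaxed separately for each charge state.

\begin{verbatim}
HARTREE_TO_EV = 27.2114

def delta_scf(e_neut, e_cat=None, e_anion=None):
    """IP and EA (eV) from total energies (hartree); None if absent."""
    ip = (e_cat - e_neut) * HARTREE_TO_EV if e_cat is not None else None
    ea = (e_neut - e_anion) * HARTREE_TO_EV if e_anion is not None else None
    return ip, ea

# Placeholder energies for a fictitious atom (illustrative only).
ip, ea = delta_scf(e_neut=-99.800, e_cat=-99.350, e_anion=-99.910)
print(f"IP = {ip:.2f} eV, EA = {ea:.2f} eV")  # IP = 12.25 eV, EA = 2.99 eV
\end{verbatim}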
\begin{figure} \centering \includegraphics[width=0.8\columnwidth]{atoms_scf-EA.eps} \caption{Deviation of the calculated electron affinities (in eV) from experiment\cite{NIST_CCCBD} for the set of 20 atoms considered.} \label{fig:atoms-EA} \end{figure} \begin{table} \caption{\label{tab:table3}Mean absolute error in the electron affinities (in eV) for the 12-EA and 20-EA sets of atoms with respect to experiment\cite{NIST_CCCBD}.} \begin{tabular*}{0.68\textwidth}{@{\extracolsep{\fill}}lcc} \toprule Method & MAE (12 EAs) & MAE (20 EAs) \\ \midrule PZSIC & 0.152 & 0.190\\ LSIC($z, k=1$) & 0.097 & 0.102 \\\midrule LSIC($w, k=1$) & 0.235 & 0.224\\ LSIC($w, k=2$) & 0.229 & 0.205\\ LSIC($w, k=3$) & 0.215 & 0.189\\ LSIC($w, k=4$) & 0.202 & 0.176\\ \midrule OSIC($w, k=1$) & 0.152 & 0.172\\ OSIC($w, k=2$) & 0.150 & 0.164\\ OSIC($w, k=3$) & 0.145 & 0.155\\ \bottomrule \end{tabular*} \end{table} \subsection{\label{sec:s3s5} Atomization energy} To study the performance of LSIC($w$) for molecules, we first calculated the atomization energies (AEs) of 37 selected molecules. Many of these molecules are a subset of the G2/97 test set\cite{doi:10.1063/1.460205}. The 37-molecule set includes systems from the AE6 set\cite{doi:10.1021/jp035287b}, a small but good representative of the main-group atomization energy (MGAE109) set\cite{doi:10.1063/1.3663871}. The AEs were calculated by taking the energy difference between the fragment atoms and the complex, that is, $ AE=\sum_{i}^{N_{atom}}E_i-E_{mol}>0,$ where $E_i$ is the total energy of an atom, $E_{mol}$ is the total energy of the molecule, and $N_{atom}$ is the number of atoms in the molecule. The calculated AEs were compared to the non-spin-orbit-coupling reference values\cite{doi:10.1063/1.3663871} for the AE6 set and to the experimental values\cite{NIST_CCCBD} for the entire set of 37 molecules. The percentage errors obtained with the various methods are shown in Fig. \ref{fig:atoms-AE}. The overestimation of the AEs with PZSIC-LSDA due to overcorrection is rectified in LSIC($w$). We have summarized the MAEs and mean absolute percentage errors (MAPEs) for the AE6 set and the 37 molecules from the G2 set in Table \ref{tab:table4}. For the AE6 set, the MAEs for PZSIC, LSIC($z$), LSIC($w,k=1$), and OSIC($w,k=1$) are $57.9$, $9.9$, $13.8$, and $33.7$ kcal/mol, respectively. The MAE of LSIC($w,k=1$) is about 4 kcal/mol larger than that of LSIC($z$), but substantially smaller than those of PZSIC and OSIC($w$). For larger $k$ in LSIC($w$), however, the performance starts to degrade, with the MAE increasing steadily to $33.5$ kcal/mol for $k=4$. This is in contrast to OSIC, where the performance improves for $k=2$ and $3$ compared to $k=1$. The scaling thus acts differently in the two methods. OSIC($w,k=1$) tends to slightly underestimate the total energies. By increasing $k$, the total energies shift toward the LSDA total energies, which improves the performance for a moderate increase in $k$. On the contrary, the total energies are slightly overestimated for LSIC($w,k=1$), and increasing $k$ makes the energies deviate further from the accurate estimates. OSIC($w,k=3$) and LSIC($w,k=1$) have a similar core-orbital SIC energy. In their study of OSIC($w$), Vydrov and Scuseria\cite{doi:10.1063/1.2204599} used values of $k$ up to 5 and found the smallest error for $k=5$ (MAE, $11.5$ kcal/mol). But we expect the OSIC performance to degrade eventually for large $k$, since an increase in $k$ results in increased quenching of the SIC correction; the results will thus eventually approach those of the DFA, in this case LSDA.
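For reference, the MAE and MAPE statistics quoted in this and the following subsections are straightforward to reproduce; the short sketch below computes both for a handful of hypothetical atomization energies (the numbers are placeholders, not values from the text).

\begin{verbatim}
def mae(calc, ref):
    """Mean absolute error, in the units of the inputs."""
    return sum(abs(c - x) for c, x in zip(calc, ref)) / len(ref)

def mape(calc, ref):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(c - x) / abs(x)
                       for c, x in zip(calc, ref)) / len(ref)

# Hypothetical atomization energies (kcal/mol) for three molecules.
calc = [392.0, 225.0, 160.5]
ref = [398.3, 220.1, 155.0]
print(f"MAE = {mae(calc, ref):.1f} kcal/mol, "
      f"MAPE = {mape(calc, ref):.2f} %")
\end{verbatim}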
For the full set of 37 molecules, PZSIC, LSIC($z$), LSIC($w,k=1$), and OSIC($w,k=1$) show MAPEs of $13.4$, $6.9$, $9.5$, and $11.9$\%, respectively. OSIC($w$) shows a slight improvement in MAPE for $k=2$ and $3$. For this larger set, LSIC($w$) consistently shows smaller MAPEs than OSIC($w$) for $k=1-3$. All four values of $k$ used with LSIC($w$) in this study showed better performance than PZSIC for the 37-molecule set. \begin{figure} \centering \includegraphics[width=0.68\columnwidth]{atomization.eps} \caption{Percentage errors of the atomization energies (in \%) for the set of 37 molecules with respect to experimental values\cite{NIST_CCCBD}, using the different scaling methods.} \label{fig:atoms-AE} \end{figure} \begin{table} \caption{\label{tab:table4}Mean absolute error (in kcal/mol) and mean absolute percentage error (in \%) of the atomization energies for the AE6 set of molecules\cite{doi:10.1063/1.3663871} and for the set of 37 molecules from the G2 set with respect to experiment\cite{NIST_CCCBD}.} \begin{tabular*}{0.68\textwidth}{@{\extracolsep{\fill}}lccc} \toprule & AE6 MAE & AE6 & 37 molecules \\ Method & (kcal/mol) & MAPE (\%) & MAPE (\%) \\ \midrule PZSIC & 57.9 & 9.4 & 13.4\\ LSIC($z, k=1)$ & 9.9 & 3.2 & 6.9\\\midrule LSIC($w, k=1$) & 13.8 & 4.4 & 9.5\\ LSIC($w, k=2$) & 18.6 & 5.3 & 9.1\\ LSIC($w, k=3$) & 26.9 & 5.8 & 9.2\\ LSIC($w, k=4$) & 33.5 & 6.7 & 9.7\\\midrule OSIC($w, k=1$) & 33.7 & 6.3 & 11.9\\ OSIC($w, k=2$) & 24.1 & 5.1 & 11.3\\ OSIC($w, k=3$) & 17.8 & 4.3 & 10.9\\ \bottomrule \end{tabular*} \end{table} \subsection{\label{sec:s3s6} Barrier heights } An accurate description of chemical reaction barriers is challenging for DFAs, since it involves the calculation of energies in non-equilibrium situations. In most cases, the saddle-point energies are underestimated, since DFAs do not perform well for a non-equilibrium state that involves a stretched bond. This shortcoming of DFAs in stretched-bond cases arises from the SIE: when an electron is shared and stretched out, the SIE incorrectly lowers the energy of the transition state. SIC handles the stretched-bond states accurately and provides a correct picture of chemical reaction paths. We studied the reaction barriers of the BH6\cite{doi:10.1021/jp035287b} set of molecules with the LSIC($w$) method. BH6 is a representative subset of the larger BH24\cite{doi:10.1021/ct600281g} set, consisting of the three reactions OH + CH$_4$ $\rightarrow$ CH$_3$ + H$_2$O, H + OH $\rightarrow$ H$_2$ + O, and H + H$_2$S $\rightarrow$ H$_2$ + HS. We calculated the total energies of the left- and right-hand sides and of the saddle point of each of these chemical reactions. The barrier heights for the forward (f) and reverse (r) reactions were obtained by taking the energy differences of the corresponding reaction states. The mean errors (MEs) and MAEs of the computed barrier heights with respect to the reference values\cite{doi:10.1021/jp035287b} are compared in Table \ref{tab:table5}. The MAEs for PZSIC, LSIC($z$), LSIC($w, k=1$), and OSIC($w,k=1$) are $4.8$, $1.3$, $3.6$, and $3.6$ kcal/mol, respectively. PZSIC significantly improves the MAE compared to LSDA (MAE, 17.6 kcal/mol), and LSIC($w, k=1$) further reduces the error relative to PZSIC; its ME and MAE indicate that there is no systematic underestimation or overestimation. LSIC($w, k=1$) does not, however, reach the same level of accuracy as LSIC($z$). For $k\geq2$, the MAEs of LSIC($w$) increase systematically, though small MEs are seen for LSIC($w$, $k=2,3$). For $k>2$ the performance deteriorates beyond that of PZSIC.
OSIC($w$) shows marginally better performance than PZSIC. Vydrov and Scuseria\cite{doi:10.1063/1.2204599} showed that the best performance is achieved with $k=1$ (MAE, $3.5$ kcal/mol). The performance improvement with OSIC is not as dramatic as with the LSICs in terms of MEs and MAEs, and rather large MEs are seen. Overall, LSIC($w$) performs better than OSIC($w$) for barrier heights. \begin{table} \caption{\label{tab:table5}Mean error (in kcal/mol) and mean absolute error (in kcal/mol) for the BH6 set of chemical reactions\cite{doi:10.1021/jp035287b}.} \begin{tabular*}{0.68\textwidth}{@{\extracolsep{\fill}}lcc} \toprule Method & ME (kcal/mol) & MAE (kcal/mol) \\ \midrule PZSIC & -4.8 & 4.8 \\ LSIC($z, k=1$) & 0.7 & 1.3 \\\midrule LSIC($w, k=1$) & -1.0 & 3.6 \\ LSIC($w, k=2$) & -0.1 & 4.6 \\ LSIC($w, k=3$) & 0.3 & 5.0 \\ LSIC($w, k=4$) & 0.6 & 5.5 \\\midrule OSIC($w, k=1$) & -3.4 & 3.6 \\ OSIC($w, k=2$) & -3.1 & 4.1 \\ OSIC($w, k=3$) & -3.0 & 4.6 \\ \bottomrule \end{tabular*} \end{table} \subsection{\label{sec:s3s7}Dissociation and reaction energies} A pronounced manifestation of the SIE is seen in the dissociation of positively charged dimers $X_2^+$. The SIE causes such a system to dissociate into two fractionally charged cations instead of $X$ and $X^{+}$. Here we use the SIE4x4\cite{C7CP04913G} and SIE11\cite{doi:10.1021/ct900489g} sets to study the performance of LSIC($w$) and OSIC($w$) in correcting the SIEs. The SIE4x4 set consists of dissociation energy calculations of four positively charged dimers at bond distances $R$ varied from their equilibrium distance $R_e$ such that $R/R_e$ = 1.0, 1.25, 1.5, and 1.75. The dissociation energy $E_D$ is calculated as \begin{equation} E_D=E(X)+E(X^+)-E(X_2^+). \end{equation} The SIE11 set consists of eleven reaction energy calculations: five cationic reactions and six neutral reactions. These two sets are commonly used for studying SIE-related problems. The calculated dissociation and reaction energies are compared against the CCSD(T) reference values\cite{C7CP04913G,doi:10.1021/ct900489g}, and the MAEs are summarized in Table \ref{tab:table6}. For the SIE4x4 set, PZSIC, LSIC($z$), LSIC($w, k=1$), and OSIC($w,k=1$) show MAEs of $3.0$, $2.6$, $4.7$, and $5.2$ kcal/mol, respectively. LSIC($z$) provides a small improvement in the equilibrium energies while keeping the accurate behavior of PZSIC at the dissociation limit, resulting in marginally better performance. LSIC($w$) shows errors a few kcal/mol larger than PZSIC. This increase in error arises because LSIC($w$) alters the (NH$_3$)$_2^+$ and (H$_2$O)$^+_2$ dissociation curves. In LSIC($z$) the scaling of the SIC occurs mostly for the core orbitals (cf.\ Table \ref{table:sic_amount}), whereas LSIC($w$) also includes some noticeable scaling down of the valence orbitals. This different scaling behavior seems to contribute to the different dissociation curves. Lastly, OSIC($w$) has a slightly larger error than LSIC($w$). For the SIE11 set, the MAEs are $11.5$, $4.5$, $8.3$, and $11.1$ kcal/mol for PZSIC, LSIC($z$), LSIC($w, k=1$), and OSIC($w,k=1$), respectively. All of the scaled-down approaches we considered, LSIC($z$), LSIC($w$), and OSIC($w$), showed performance improvements over PZSIC. LSIC($z$) shows the largest error reduction, 60\%, while LSIC($w, k=1$) shows a 28\% decrease in error with respect to PZSIC. OSIC($w$) with $k=1-3$ has slightly smaller MAEs, within 1 kcal/mol of PZSIC. The LSIC($z$) method improves the cationic reactions more than the neutral reactions with respect to PZSIC.
Increasing $k$ beyond 2 results in too much suppression of the SIC and leads to an increase in error for LSIC($w$, $k\geq2$). LSIC($w$) yielded consistently smaller MAEs than OSIC($w$), but larger than LSIC($z$), over the whole set of SIE11 reactions. Finally, we show the ground-state dissociation curves of H$^+_2$ and He$^+_2$ in Fig. \ref{fig:dissociation}. As previously discussed in the literature\cite{doi:10.1021/jp0534479}, DFAs cause these complexes to dissociate at large separation into two fractionally charged fragments. PZSIC restores the correct dissociation behavior at large separation distances. When LSIC is applied, the behavior of PZSIC at the dissociation limit is preserved in both LSIC($z$) and the present LSIC($w$). For H$_2^+$, a one-electron system, LSIC reproduces behavior identical to PZSIC [Fig. \ref{fig:dissociation} (a)]. For He$_2^+$, a three-electron system, LSIC corrects PZSIC only near the equilibrium regime [Fig. \ref{fig:dissociation} (b)], bringing the equilibrium energy closer to the CCSD(T) energy than PZSIC does. The implication of Fig. \ref{fig:dissociation} is that the present scaling factor $w$ performs well in differentiating single-orbital-like regions from many-electron-like regions. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{h_hp.eps} \includegraphics[width=0.8\columnwidth]{he_hep.eps} \caption{Dissociation curves of (a) H$_2^+$ and (b) He$_2^+$ using various methods. The CCSD(T) curve from Ref.~[\onlinecite{doi:10.1063/1.2566637}] is plotted for comparison.} \label{fig:dissociation} \end{figure} \begin{table} \caption{\label{tab:table6} Mean absolute errors for the dissociation and reaction energies (in kcal/mol) of the SIE4x4 and SIE11 sets of chemical reactions with respect to CCSD(T)\cite{C7CP04913G,doi:10.1021/ct900489g}.} \begin{tabular*}{0.68\textwidth}{@{\extracolsep{\fill}}lcccc} \toprule Method & SIE4x4 & SIE11 & SIE11 & SIE11 \\ & & & 5 cationic & 6 neutral \\ \hline PZSIC & 3.0 & 11.5 & 14.9 & 8.7 \\ LSIC($z$) & 2.6 & 4.5 & 2.3 & 6.3 \\\midrule LSIC($w$) (k=1) & 4.7 & 8.3 & 8.6 & 8.0 \\ LSIC($w$) (k=2) & 5.5 & 8.3 & 8.3 & 8.3 \\ LSIC($w$) (k=3) & 5.8 & 8.8 & 8.2 & 9.3 \\ LSIC($w$) (k=4) & 5.9 & 9.3 & 8.2 & 10.2\\\midrule OSIC($w$) (k=1) & 5.2 & 11.1 & 13.7 & 9.0 \\ OSIC($w$) (k=2) & 6.0 & 11.0 & 13.5 & 9.0 \\ OSIC($w$) (k=3) & 6.4 & 10.9 & 13.3 & 8.8\\ \bottomrule \end{tabular*} \end{table} \subsection{ Water binding energies: a case where LSIC($z$) performs poorly} Sharkas \textit{et al.}\cite{Sharkas11283} recently studied the binding energies of small water clusters using the PZSIC method in conjunction with FLOs to examine the effect of SIC on the binding energies of these systems. Water clusters are bound by weaker hydrogen bonds and provide a different class of systems on which to test the performance of the LSIC method. Earlier studies using LSIC($z$) for the polarizabilities and ionization energies have shown that LSIC($z$) provides an excellent description of these properties when compared to CCSD(T) results\cite{C9CP06106A,waterpolarizability}. Here, we study the binding energies of the water clusters. We find that the choice of iso-orbital indicator plays a crucial role for the water cluster binding energies. The structures considered in this work are (H$_2$O)$_n$ ($n=1-6$), whose geometries are taken from the WATER27 set\cite{doi:10.1021/ct800549f}, optimized at the B3LYP/6-311++G(2d,2p) level of theory. The hexamer structure has a few known isomers, and we considered the book (b), cage (c), prism (p), and ring (r) isomers.
The results are compared against the CCSD(T)-F12b values from Ref.~[\onlinecite{doi:10.1021/acs.jctc.6b01046}] in Table \ref{tab:waterbinding}. We obtained MAEs of 118.9, 172.1, and 46.9 meV/H$_2$O for PZSIC, LSIC($z$), and LSIC($w$), respectively. Thus, LSIC($z$) underestimates the binding energies of the water clusters, with an error of roughly the same magnitude as that of LSDA (MAE, 183.4 meV/H$_2$O). This is one case where LSIC($z$) does not improve over PZSIC. A simple explanation for this behavior of LSIC($z$) is that, although the $z_\sigma$ used in LSIC($z$) can detect weak-bond regions, it cannot differentiate slowly varying density regions from weak-bond regions: $z_\sigma \rightarrow 0$ in both situations, causing the weak-bond regions to be improperly treated. Fig. \ref{fig:waterbind} (a) shows $z_\sigma$ for the water dimer, where both the slowly varying density and the weak-interaction regions are detected but not differentiated. As a result, the total energy of the complex shifts too much in comparison to those of the individual water molecules. Thus, the underestimation of the water cluster binding energies is due to the choice of $z$ and not to the LSIC method itself. Indeed, by choosing $w$ as the scaling factor, the binding energies are much improved. Fig. \ref{fig:waterbind} (b) shows that there is no discontinuity of $w$ between the two water molecules (the $w_i$'s of the two FLOs along the hydrogen bond are plotted together in the figure). Hence, unlike in LSIC($z$), the weakly interacting region is not improperly scaled down in LSIC($w$). LSIC($w$) shows an MAE of 46.9 meV/H$_2$O, comparable to SCAN (MAE, 35.2 meV/H$_2$O). This result is interesting, as SCAN uses a function that can identify weak-bond interactions; LSIC($w$)-LSDA may thus behave qualitatively similarly to the detection function of SCAN in weak-bond regions. The study of water binding energies is so far the only case where the original LSIC($z$) performs poorly, and LSIC can be improved simply by using a different iso-orbital indicator. This case serves as motivation for identifying an appropriate iso-orbital indicator that works for all bonding regions in LSIC. \begin{table} \caption{\label{tab:waterbinding}The binding energy of water clusters (in meV/H$_2$O).} \begin{tabular*}{0.68\textwidth}{@{\extracolsep{\fill}}lcccc} \toprule $n$ & PZSIC & LSIC($z$) & LSIC($w$) & CCSD(T)$^\textit{a}$\\ \midrule 2 & -153.7 & -34.9 & -82.7 & -108.6 \\ 3 & -321.6 & -73.9 & -183.0 & -228.4 \\ 4 & -425.2 & -125.0 & -248.6 & -297.0 \\ 5 & -446.9 & -142.7 & -264.8 & -311.4 \\ 6b & -467.1 & -133.6 & -275.0 & -327.3 \\ 6c & -466.8 & -113.9 & -274.8 & -330.5 \\ 6p & -467.7 & -104.8 & -276.2 & -332.4 \\ 6r & -458.1 & -150.5 & -275.5 & -320.1 \\ \midrule MAE & 118.9 & 172.1 & 46.9 & \\ \bottomrule \multicolumn{5}{l}{$^\textit{a}$Reference~[\onlinecite{doi:10.1021/acs.jctc.6b01046}]}\\ \end{tabular*} \end{table} \begin{figure} \centering \includegraphics[width=0.68\columnwidth]{waterdimer_a.eps} \includegraphics[width=0.68\columnwidth]{waterdimer_b.eps} \caption{Cross-sectional plots of the iso-orbital indicators for the water dimer: (a) $\tau^W/\tau$ and (b) the $\rho_i/\rho$'s from the two FLOs along the hydrogen bond.} \label{fig:waterbind} \end{figure} We now provide a qualitative explanation of why LSIC($w$) gives improved results over PZSIC. This reasoning is along the same lines as that offered by Zope \textit{et al.} \cite{doi:10.1063/1.5129533}. As mentioned in Sec.
\ref{sec:introduction}, when the self-interaction errors are removed using PZSIC, an improved description of barrier heights, which involve stretched bonds, is obtained, but equilibrium properties like total energies and atomization energies usually deteriorate compared to the uncorrected functional. This is especially so for functionals that go beyond the simple LSDA. Typically this is because of the overcorrecting tendency of PZSIC. The non-empirical semilocal DFA functionals are designed to be exact in the uniform electron gas limit, but this exact condition is no longer satisfied when PZSIC is applied to the functionals\cite{doi:10.1063/1.5090534}. This can be seen from the exchange energies of the noble gas atoms and the extrapolation using the large-$Z$ expansion of $E_X$, as shown in Fig. \ref{fig:unifform_gas_lim}. Following Ref.~[\onlinecite{doi:10.1063/1.5090534}], we plot the percentage error of the atomic exchange energies as a function of $Z^{-1/3}$; the region near the origin thus corresponds to the uniform gas limit. The plot was obtained by fitting the exchange energies ($E_X$) of the Ne, Ar, Kr, and Xe atoms (within LSIC($w$)-LSDA, LSIC($z$)-LSDA, and LSDA) with the following fitting function\cite{doi:10.1063/1.5090534}: \begin{equation} \frac{E_X^{approx}-E_X^{exact}}{E_X^{exact}}\times 100\%=a+bx^2+cx^3, \end{equation} where $x=Z^{-1/3}$ and $a$, $b$, and $c$ are the fitting parameters. The LSDA is exact in the uniform gas limit. So too is LSIC($z$), since the scaling factor $z_\sigma$ approaches zero as the gradient of the electron density vanishes while the kinetic energy density in the denominator remains finite. The small deviation from zero seen near the origin in Fig. \ref{fig:unifform_gas_lim} for LSIC($z$) is due to fitting error (from the limited number of data points); this error is $-0.62\%$ for LSIC($z$). In contrast, correcting LSDA using PZSIC introduces a large error in the uniform gas limit. The scaling factor $w$ used here identifies single-electron regions, since the density ratio approaches one in that limit. Fig. \ref{fig:unifform_gas_lim} shows that the present LSIC($w$) approach also recovers the uniform gas limit that is lost in PZSIC. This partly explains the success of LSIC($w$). Though the performance of LSIC($w$) is substantially better than that of PZSIC-LSDA, it falls short of LSIC($z$). On the other hand, unlike LSIC($z$), it provides a good description of weak hydrogen bonds, highlighting the need to identify suitable iso-orbital indicators or scaling factors for applying pointwise SIC with the LSIC method. One possible choice may be scaling factors that are functions of the $\alpha$ parameter used in the construction of the SCAN meta-GGA or of the recently proposed\cite{PhysRevB.99.041119} $\beta$ parameter. A scaling factor containing $\beta$, recently used by Yamamoto and coworkers within the OSIC scheme, showed improved results\cite{doi:10.1063/5.0004738}. Future work will involve designing suitable scaling factors involving $\beta$ for use in the LSIC method. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{uniff_g_lim.eps} \caption{Percentage error of the approximate exchange energy relative to the exact exchange energy, as a function of $Z^{-1/3}$.} \label{fig:unifform_gas_lim} \end{figure} \section{\label{sec:conclusion} Conclusions} To recapitulate, we investigated the performance of LSIC with a simple scaling factor, $w$, that depends only on the orbital and spin densities.
The performance assessment was carried out for total atomic energies, atomization energies, ionization potentials, electron affinities, barrier heights, and dissociation energies on standard data sets of atoms and molecules. The results show that LSIC($w$) performs better than PZSIC for all properties, with the exception of the electron affinities and the SIE4x4 set of dissociation energies. We also compared the performance of $w$ in LSIC against the OSIC of Vydrov \textit{et al.} The results indicate that, although OSIC overall performs better than PZSIC, the improvement over PZSIC is somewhat limited. On the other hand, LSIC($w$) is consistently better than OSIC($w$). We have also studied the binding energies of small water clusters, which are bound by weak hydrogen bonds. Here, LSIC($w$) performs very well compared to both PZSIC and LSIC($z$), with performance comparable to SCAN. The present work shows the promise of the LSIC method and also demonstrates its limitation in describing weak hydrogen bonds when used with the kinetic energy ratio $z_\sigma$ as the iso-orbital indicator. This limitation is due to the inability of $z_\sigma$ to distinguish weak-bonding regions from slowly varying density regions. The scaling factor $w$ works differently from the scaling factor $z$; hence LSIC($w$) provides a good description of the weak hydrogen bonds in water clusters. The work thus highlights the importance of designing a suitable iso-orbital indicator for use with LSIC that can detect weak-bonding regions. \section*{Data Availability Statement} The data that support the findings of this study are available within the article and the supplementary information. \section*{Conflicts of interest} There are no conflicts of interest to declare. \section*{Acknowledgement} The authors acknowledge Drs. Luis Basurto, Carlos Diaz, and Po-Hao Chang for discussions and technical support. This work was supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, as part of the Computational Chemical Sciences Program under Award No. DE-SC0018331. Support for computational time at the Texas Advanced Computing Center through NSF Grant No. TG-DMR090071, and at NERSC, is gratefully acknowledged. \clearpage
\section{Introduction} \label{sec_introduction} In response to rising concerns regarding the effect of aviation emissions on the climate, the design of modern gas turbine combustors is changing drastically. Current strategies to reduce NO$_x$ and CO$_2$ emissions rely on lean combustion and on optimization of the combustor design to reduce the engine weight and complexity. These new concepts raise the critical issue of high-altitude relight, which is regarded as one of the most stringent constraints on aeronautical gas turbine design. Most engines in circulation were designed using empirical correlations resulting from extensive experimental test campaigns~\cite{Lefebvre:1998}. Today, high-performance numerical tools are available and play an increasing role in the context of development cost saving. The stochastic nature of the ignition process has been well documented in multiple experiments reported in the literature. Stochasticity originates from variations of the size and strength of the energy deposited by the ignition system~\cite{Kono:1984}, from the turbulent flow and the reactant mixing at the sparking location~\cite{Mastorakos:2009,Sforzo:2015}, and from the large-scale flow variations in the combustor~\cite{Ahmed:2007a,Cordier:2013}. Owing to these observations, ignition performance is quantified with ignition probability ($P_{ign}$) maps~\cite{Ahmed:2007a,Cordier:2013}. Conditions that maximize $P_{ign}$ are: 1) high flammability and/or rapid fuel availability at the spark location, 2) low turbulence intensity around the spark location, and 3) large-scale flow patterns allowing the flame to propagate toward the burner nozzle. Detailed analysis of the ignition process also reveals that ignition success is not solely conditioned by the local flow properties at the igniter position, but also by the flow conditions along the flame kernel trajectory after sparking. From the numerical point of view, the transient and stochastic nature of ignition calls for the Large Eddy Simulation (LES) approach, which has been proven to accurately predict ignition in configurations representative of gas turbines~\cite{Boileau:2008, Triantafyllidis:2009, Subramanian:2010, Jones:2010}. However, although the direct prediction of the ignition probability using LES has been proven feasible~\cite{Esclapez:2015}, building a full ignition probability map is not possible due to the computational cost of the tens of simulations of ignition sequences required at each point of the map. Rapid and computationally affordable evaluation of ignition probability maps was first proposed in the pioneering work of Birch and co-workers~\cite{Birch:1977, Birch:1981,Smith:1986}, who developed a model using experimental measurements of the fuel distribution and velocity. They distinguished between $P_{ign}$ and the kernel initiation probability $P_{ker}$, the latter shown to be correlated with the flammability factor $F_f$, defined as the local probability of the mixture being in flammable conditions. These early studies recently inspired the development of more advanced methods, which can be sorted into two classes: 1) the probability is evaluated from the flow properties at the sparking location only; 2) the model tracks the spatio-temporal evolution of the ignition kernel to evaluate its chance of igniting the burner. In the first class, several criteria based on the local flammability, turbulence intensity, and velocity direction are evaluated from the non-reacting flow to assess the success of ignition.
Stochasticity is retrieved from the analysis of multiple instantaneous flow fields, treated as independent initial states leading to independent ignition events, from which the ignition probability is constructed~\cite{Linassier:2013, Eyssartier:2013}. These methods are computationally fast and provide a good estimation of $P_{ker}$, but they usually fail to predict $P_{ign}$, as they ignore the subsequent flame kernel evolution. Models of the second class were initiated by Wilson \emph{et al.}~\cite{Wilson:1999}, where the dispersion of a conserved scalar in simulations of the non-reacting flow is used to track possible kernel trajectories. A more recent attempt~\cite{Richardson:2007b, Weckering:2011, Neophytou:2012} introduced the Lagrangian tracking of representative flame particles, adding artificial stochasticity to the mean flow. In both methods, the Karlovitz number was used to evaluate the occurrence of flame quenching~\cite{Soworka:2014}. These methods are intrinsically well suited to capture transient flame kernel motion and expansion, although they were found to be very sensitive to the success criteria thresholds and required multiple simulations to obtain converged statistics. Additionally, they do not use the true flow statistics along the kernel trajectories, which are known to vary strongly in space in complex geometries. Note that all these methods remain valid only as long as the flame kernel stays small, and should not be used once it has given birth to a large turbulent flame that modifies the flow. In particular, they should not be used to predict annular light-round in azimuthal combustors, where burnt gas expansion greatly affects the flame propagation. In this work, a reduced-order model to predict the ignition probability of modern gas turbine combustors (i.e., featuring one or more recirculation zones stabilizing the flame) is proposed. In contrast with the Monte-Carlo approach used by Neophytou \emph{et al.} \cite{Neophytou:2012}, the model includes the real, local flow statistics along the kernel trajectories, which can be extracted from time-averaged non-reactive flow quantities. This makes it possible to take into account the complexity of the flow in the combustion chamber, for an improved prediction of the ignition probability and of its sensitivity to the geometrical design. The model development and testing are based on experiments and simulations of a lean swirled burner operated in premixed, non-premixed and two-phase flow combustion modes~\cite{Cordier:2013,Collin-Bastiani:2018}, well representative of real gas turbine conditions and flows. The paper is organized as follows. Section~\ref{sec_configuration} introduces the experimental set-up and the numerical results upon which the model is developed and tested. In Section~\ref{sec_model} the Model for Ignition STatistics (MIST) is derived, and in Section~\ref{sec_results} the results of MIST applied to the test configuration are presented. Finally, the model outputs and performance are discussed, and future developments are outlined in the conclusion. \section{Test configuration} \label{sec_configuration} \subsection{Experimental configuration} \label{ssec:XPsetup} The experimental configuration employed to evaluate the model performance was specifically designed by Cordier \emph{et al.}~\cite{Cordier:2013, Cordier:2013b} to study ignition in complex flows, representative of realistic gas turbines, first with gas only (methane) and later with liquid fuel injection ($n$-heptane)~\cite{Marrero-Santiago:2017,Collin-Bastiani:2018}.
A picture of the test rig is presented in Fig.~\ref{fig:geom}(a). The burner is capable of operating in premixed ($P$), non-premixed ($NP$) and spray ($SP$) modes at two levels of swirl intensity. It is made of four major components, namely a plenum, a swirled injection system, a combustion chamber and a convergent exhaust. The flow entering the plenum is first tranquilized through three grids before entering the swirler vanes. The combustion chamber has a square section of 100 mm side length and is 260 mm long. A convergent exhaust ends the combustion chamber to avoid air admission induced by the swirling flow. Finally, the injection system is composed of a central jet ($d = 4$ mm) nested within the annular swirl stream ($D_{in} = 9$ mm, $D_{ext} = 20$ mm) of the swirler, the latter consisting of 18 radially fed channels inclined by 45 degrees. In $P$ mode, both the central tube and the plenum are fed with a methane/air mixture, whereas in $NP$ mode the central jet is fed with pure methane and the plenum is fed with air. In spray mode, the central jet injection tube is replaced by a simplex pressure atomizer (Danfoss, $1.46$~kg/h, $80^\circ$ hollow cone) delivering liquid $n$-heptane. All experimental operating conditions of modes $P$, $NP$ and $SP$ are summarized in Tab.~\ref{Tbl:expe_cond}. Contrary to the gaseous cases, air and fuel are preheated in the $SP$ case and a leaner regime is studied. In non-reacting conditions, stereoscopic particle image velocimetry (SPIV) is used to measure the three components of velocity in a 50 mm $\times$ 67 mm field of view. Statistics of velocity are computed from 1000 images. Phase Doppler anemometry (PDA) measurements were used to characterize the liquid phase in terms of droplet size and size-classified velocity. To measure the fuel mole fraction field, planar laser-induced fluorescence (PLIF) based on acetone is used in $NP$ mode, while toluene-PLIF is preferred in $SP$ mode~\cite{Marrero-Santiago:2017b}. Ignition is triggered by laser-induced breakdown, allowing a non-intrusive control of the deposit location, duration and strength. Ignition probability maps are constructed using 50 and 30 ignition trials at each deposit location for the gaseous cases and the $SP$ case, respectively. This results in a maximum error on the probability of about 7\% and 9\%, respectively, consistent with the maximum binomial standard error $0.5/\sqrt{n}$ for $n$ trials. \begin{table} \caption{Summary of experimental operating conditions in modes $P$, $NP$ and $SP$. \label{Tbl:expe_cond}} \centering \begin{tabular}{| l | c c c |} \hline & P & NP & SP \\ \hline \hline Central jet $\dot{m}_{Air}$ $(g/s)$ & $0.224$ & - & - \\ \hline Plenum $\dot{m}_{Air}$ $(g/s)$& $5.37$ & $5.43$ & 8.2 \\ \hline Central jet $\dot{m}_{Fuel}$ $(g/s)$& $0.009$ & $0.234$ & 0.33 \\ \hline Plenum $\dot{m}_{Fuel}$ $(g/s)$& $0.233$ & - & - \\ \hline $\phi_{glob}$ & $0.75$ & $0.75$ & $0.61$ \\ \hline $T_{in}$ (gas) $(K)$& 300 & 300 & 416 \\ \hline $T_{Fuel}$ (liquid) $(K)$& - & - & 350 \\ \hline \end{tabular} \end{table} \subsection{Large Eddy Simulation set-up} \label{ssec:LESsetup} All simulations were performed with AVBP, an explicit cell-vertex massively-parallel code solving compressible reacting flows~\cite{Gicquel:2011}. The equations and models used in the present study are standard ones in LES solvers and a full description can be found in the review of Gicquel \emph{et al.}~\cite{Gicquel:2012}. The TTGC numerical scheme, third-order accurate in space and time~\cite{Colin:2000a}, is used. Inlet and outlet boundary conditions are treated according to the NSCBC formulation~\cite{Poinsot:1992},
while no-slip walls are considered. Turbulent sub-grid stresses are modeled using the SIGMA model~\cite{Nicoud:2011}. In the $SP$ mode, a Lagrangian approach is retained for the dispersed phase description, using models for drag, evaporation and injection (FIM-UR model) already presented in a previous study~\cite{Shum-Kivan:2017}. The prescribed droplet size distribution is fitted to experimental data using a Rosin-Rammler distribution with a spread parameter $q = 2.3$ and a Sauter mean diameter $d_p^{SMD} = 31~\mu$m. \begin{figure*}[htbp!] \centering \includegraphics[width=0.8\textwidth]{./FIG/Configuration_V2.pdf} \caption{(a) Experimental test rig. (b): Numerical geometry and injection system details. Main components are: 1. Plenum, 2. Injection system, 3. Combustion chamber, 4. Convergent exit. (c): Cut through the computational domain showing the mesh refinement near the central gaseous injection ($P$ and $NP$ cases).} \label{fig:geom} \end{figure*} The computational domain includes the four components of the experimental configuration as shown in Fig.~\ref{fig:geom}(b). The domain is discretized into a fully unstructured mesh of 22 million tetrahedral elements, shown in Fig.~\ref{fig:geom}(c), with a cell size of about 150 $\mu$m in the swirler and the mixing region and about 800 $\mu$m in the rest of the combustion chamber. The axial direction is referred to as the $z$-axis, corresponding to the main flow direction, while the $x$-axis and $y$-axis denote the transverse directions. Space dimensions are non-dimensionalized by the injection system exit diameter $D_{ext}$. Flow statistics are collected over $150$~ms after reaching the stationary average state. \subsection{Non-reactive LES results} \label{ssec:LES_NR} The flow pattern shown in Fig.~\ref{fig:stream} is typical of highly swirled configurations: the Swirled Jet (SWJ) issued from the injection system generates a reverse flow along the central axis, referred to as the Inner Recirculation Zone (IRZ). The IRZ closes downstream at $z/D_{ext}=10$ due to the presence of the convergent exhaust. Because of the confined environment, the SWJ also induces recirculation on its outer side, referred to as Corner Recirculation Zones (CRZ), closed at $z/D_{ext}=3$ in the gaseous cases and at $z/D_{ext}=2.5$ in the $SP$ case. The gaseous flow exiting the central injection in the $P$ and $NP$ cases meets the back flow of the IRZ at $z/D_{ext}=0.4$, generating a zero-axial-velocity stagnation point. In the $SP$ case, the $n$-heptane injection momentum leads to a stagnation point almost at the injector surface. Finally, strong shear layers develop between the SWJ and both the IRZ and the CRZ. Note that the appearance of vortex breakdown and the formation of the IRZ occur as the swirl number exceeds a critical value ($S_{w,crit} = 0.707$~\cite{Billant:1998}). In the $NP$ case, the swirl number has been measured experimentally as $S_{w,Exp.} = 0.76$, and a very close value $S_{w,LES} = 0.78$ has been computed from the LES results. \begin{figure*}[ht!] \centering \includegraphics[width=0.85\textwidth]{./FIG/Streamlines_MONO} \caption{$NP$ case, non-reacting flow. Time-averaged pseudo-streamlines in a central $x$-normal plane (left) and $z$-normal plane (right). Swirled Jet (SWJ, red), Inner Recirculation Zone (IRZ, blue) and Corner Recirculation Zone (CRZ, green).
Boxes indicate the experimental ignition maps for the $P$ and $NP$ cases (plain) and the $SP$ case (dashed).} \label{fig:stream} \end{figure*} A detailed comparison of the non-reacting LES predictions against experiment for the $P$ case has been reported in a previous publication~\cite{Barre:2013}. A similar comparison is presented in~\ref{app:valid_cold} for the $NP$ and $SP$ cases. All show very good agreement and justify developing the ignition model on the basis of the LES results. The focus is now placed on mixing, which is critical for ignition in both the $NP$ and $SP$ cases. Fig.~\ref{fig:Mixing}(left) shows the mean flammability factor for case $NP$: \begin{equation} F_f = \int_{Z_{lean}}^{Z_{rich}} P(Z)\;dZ \label{eq:Ff} \end{equation} \noindent where $P(Z)$ is the probability density function (PDF) of the mixture fraction $Z$ (using the definition of Bilger \cite{Bilger:1989}) and $Z_{lean}$ and $Z_{rich}$ are the lower and upper flammability limits, respectively. Since the overall equivalence ratio is flammable, $F_f$ is unity in most of the combustion chamber, where all the species are well mixed, and reaches 0 only close to the methane and air inlets. Intermediate values of $F_f$ are found in the wake of the air swirled jet, between the rich injection and the pure air. The IRZ is mostly filled with premixed flammable mixture. The mixture fraction PDFs extracted along arrows (a) and (b) of Fig.~\ref{fig:Mixing}(left) and displayed in Fig.~\ref{fig:Mixing}(right) show the variety of $P(Z)$ and the strong inhomogeneity in these zones. In the $SP$ case, evaporation and mixing effects are reflected in the gaseous and liquid equivalence ratio maps $\phi_g$ and $\phi_l$, shown in Fig.~\ref{fig:evap_mix_spray}. Due to the preheated conditions, droplets evaporate quickly, leading to $\phi_l > 1$ in the spray jet zone for $z/D_{ext}< 1$. Almost no droplets are found in the upper part of the spray zone and even fewer in the IRZ and CRZ. The entire CRZ is characterized by a very homogeneous gaseous equivalence ratio close to the global value $\phi_{glob} = 0.61$, whereas the IRZ is leaner ($\phi_g < 0.5$), close to the lean flammability limit. \begin{figure*}[ht!] \centering \includegraphics[width=0.7\textwidth]{./FIG/FlamFact_PDFmix_V2.pdf} \caption{$NP$ case. Mean flammability factor field in a central $x$-normal plane with $Z$ iso-lines (left) and $P(Z)$ along arrows (a) (top) and (b) (bottom) in the mixing region (right). The grey area highlights the flammable mixture interval.} \label{fig:Mixing} \end{figure*} \begin{figure}[ht!] \centering \includegraphics[width=0.48\textwidth]{./FIG/phig_phil_cold_V3.pdf} \caption{$SP$ case. Maps of the cold flow gaseous equivalence ratio $\phi_g$ (left) and liquid equivalence ratio $\phi_l$ (right). The highlighted iso-line $\phi_g = 0.5$ corresponds to the lean flammability limit of \emph{n}-heptane, which also marks the transition between weakly evaporation-controlled flames ($\phi_g > 0.5$) and evaporation-controlled flames (Section \ref{sssec:mixture}).} \label{fig:evap_mix_spray} \end{figure} \section{The MIST Model} \label{sec_model} The prediction of the ignition probability is classically based on the combination of kernel motion statistics with local flow properties. However, in contrast with previous methodologies~\cite{Neophytou:2012, Linassier:2013, Eyssartier:2013}, here the flame kernel trajectory statistics are built from the non-reacting flow statistics.
The objective of MIST is to predict the probability of creating a flame kernel large enough to subsequently stabilize on the injector. Capturing the flame stabilization process itself is not in the scope of MIST, since LES has shown that the kernel expansion can significantly modify the instantaneous velocity field in the combustor \cite{Jones:2010,Barre:2014}, rendering the cold flow statistics upon which MIST is based inaccurate. Although failure to stabilize the flame after the kernel occupies a significant portion of the combustion chamber has been observed experimentally \cite{Read:2008}, we believe this mode of failure marginally affects the overall ignition probability compared to the critical stage of creating an expanding flame kernel. In the present experimental test case, such a failure mode was not observed. Additionally, ignition stochasticity mostly occurs in the first instants of ignition, when local turbulence and mixing along the kernel trajectory completely control the flame kernel survival, whereas at later times local turbulence and mixture composition only affect the ability of flame fronts to propagate locally. The model can be decomposed into four steps: \begin{enumerate} \item Extract from a non-reacting flow solution the mean and rms of the velocity ($\overline{\boldsymbol{u}}$, $\boldsymbol{u'}$) and mixture fraction ($\overline{Z}$, $Z'$). Liquid volume fraction moments ($\overline{\alpha_l}$, $\alpha_l'$), mean droplet diameter ($\overline{d_l}$), and mean droplet velocity ($\overline{\boldsymbol{u_l}}$) are also required for the $SP$ case. If LES is used, statistics are obtained from time-averaging. \item Use the spark characteristics to evaluate the kernel initial size and the time required for cooling from the sparking temperature to the burnt gas temperature. This step is performed in 0D, assuming that the kernel temperature evolution is dictated by the balance between combustion heat release and turbulent dissipation. \item Compute quenching criteria from the non-reacting flow statistics. \item Starting from the initial kernel defined in step 2, compute the temporal evolution of kernel motion statistics. This is based on the evolution of the kernel probability of presence $P_{pres}$, constructed from the flow statistics obtained in step 1 and the quenching criteria computed in step 3. In this step, the kernel size evolution is also computed to determine when it has grown sufficiently to ensure a successful ignition. \end{enumerate} Note that step 1 may be performed with any approach able to provide flow statistics, either numerically or with measurements. A flowchart summarizing the main MIST steps described hereafter is provided in~\ref{app:MISTflowchart}. \subsection{Step 2: Initial kernel} \label{ssec:initkernel} Following the spark discharge, the transition between the hot plasma and a self-sustained flame kernel occurs at temperatures largely above the burnt gas temperature~\cite{Maly:1978}. A detailed description of this transition requires taking into account complex physico-chemical interactions and is out of the scope of the present model. Here the initial kernel development is split into two phases: the kernel growth is first sustained by the high temperature associated with the energy deposit, then it is driven by combustion. During the first phase, the kernel can survive a non-flammable mixture or strong turbulence.
This has been observed experimentally in typical gas turbine configurations \cite{Mastorakos:2009} and more recently further studied in stratified turbulent flow configurations \cite{Sforzo:2015, Sforzo:2017}, where the spark igniter is located in a non-flammable region and the kernel transition from this adverse location to a flammable region is studied. A data-driven model to predict the behavior of the flame kernel during that transition was proposed, highlighting the importance of cold gas entrainment in the kernel \cite{Sforzo:2017}. Such a process is not accounted for in the present modeling approach, but could be investigated to adapt the present model to various types of ignition systems. The simple model described hereafter aims at evaluating the time required for the kernel to cool down to the burnt gas temperature, which will be used in Step 4 to apply extinction criteria. Given the amount of deposited energy $\varepsilon_i$ and the deposit volume $V_s$, the initial kernel temperature $T_{k}^0$ is given by (assuming no reaction during the short deposition duration): \begin{equation} T_{k}^0 = T^0 + \frac{1}{\rho C_p}\frac{\varepsilon_i}{V_s} \end{equation} where $T^0$ is the initial gas temperature, and $\rho$ and $C_p$ are respectively the initial gas density and specific heat. In practice, the computation described hereafter is performed using standard thermodynamics, which is not suited to high-temperature plasmas. The maximum temperature is therefore limited to 5000 K, from which it is possible to evaluate the initial kernel radius, assuming that the spark deposit is Gaussian in space (as classically used in many DNS and LES of ignition events, see \cite{Lacaze:2009b} for more details). The spark energy used in MIST matches the standard values used in previous LES \cite{Esclapez:2015,Collin-Bastiani:2018}: 30 mJ in the $P$ and $NP$ cases, and 25 mJ in the $SP$ case. The kernel temperature $T_k$ then evolves following a 0-dimensional equation: \begin{equation} \frac{dT_k}{dt} = \dot{\omega}_T(\overline{Z}_{flam}) + \frac{D_{th}}{r_k^2}(T^0 - T_k) \label{eq_kernel_balance} \end{equation} The combustion heat release rate $\dot{\omega}_T$ is evaluated at the mean flammable mixture fraction $\overline{Z}_{flam}$ in the sparking zone using the laminar flame expression: \begin{equation} \dot{\omega}_T(\overline{Z}_{flam}) = \frac{Y_F(\overline{Z}_{flam}) \mathcal{Q}_r S_L^0(\overline{Z}_{flam})} {C_p \delta_L^0(\overline{Z}_{flam})} \label{eq:source_combu} \end{equation} with \begin{equation} \overline{Z}_{flam} = \frac{\int_{Z_{lean}}^{Z_{rich}} Z P(Z)\;dZ}{F_f} \label{zflam} \end{equation} In Eq.~\ref{eq:source_combu}, $\mathcal{Q}_r$ is the heat of combustion, and $S_L^0$ and $\delta_L^0$ are the laminar flame speed and thickness. The diffusive heat loss $D_{th}$ is computed as the sum of the laminar and turbulent thermal diffusivities, the latter given by~\cite{Akindele:1982}: \begin{align} D_{th,turb} = 0.44 u' l_t \left( 1 - \exp \left( -\frac{u' t}{0.44 l_t} \right) \right) \end{align} where $l_t$ is the integral turbulent scale. The turbulent diffusivity progressively increases with time $t$ from 0 to its fully developed value, in order to reflect that, with time, the kernel interacts with turbulent eddies of increasing size~\cite{Akindele:1982}.
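To make Step 2 concrete, a minimal sketch (in Python) integrating Eq.~\ref{eq_kernel_balance} with a forward-Euler scheme is given below. For simplicity, the kernel radius is held at its initial value, whereas MIST also advances it with the growth law introduced in the next paragraph; all numerical values (flame properties, burnt-gas temperature, initial radius, time step) are illustrative assumptions and not the values used in MIST.

\begin{verbatim}
import numpy as np

def cooling_time(u_rms, l_t, S_L0=0.4, delta_L0=5e-4,
                 Y_F=0.055, Q_r=50e6, C_p=1300.0, D_lam=1e-4,
                 T0=300.0, Tk0=5000.0, T_burnt=2200.0,
                 r_k=1e-3, dt=1e-6, t_max=5e-3):
    # Combustion source term of Eq. (eq:source_combu), in K/s.
    omega_T = Y_F * Q_r * S_L0 / (C_p * delta_L0)
    T_k, t = Tk0, 0.0
    while T_k > T_burnt and t < t_max:
        # Time-developing turbulent diffusivity (Akindele et al.).
        D_turb = 0.44 * u_rms * l_t * \
            (1.0 - np.exp(-u_rms * t / (0.44 * l_t)))
        T_k += dt * (omega_T
                     + (D_lam + D_turb) / r_k**2 * (T0 - T_k))
        t += dt
    return t  # kernel cooling time t_CD

# Low turbulence (CRZ-like) vs. high turbulence (shear layer):
print(cooling_time(u_rms=1.0, l_t=0.01),
      cooling_time(u_rms=10.0, l_t=0.01))
\end{verbatim}

With these assumed values, the low-turbulence case returns a cooling time of the order of a millisecond and the high-turbulence case one of the order of $10^2~\mu$s, consistent with the trend of Fig.~\ref{fig:step2}.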
Finally, the kernel growth is simply calculated using the laminar flame speed~\cite{Boudier:1992}: \begin{equation} \frac{dr_k}{dt} = \frac{T_k}{T^0}S_L^0(\overline{Z}_{flam}) \label{eq:drk/dt} \end{equation} Solving Eq.~\ref{eq_kernel_balance} with the flow properties at the spark location leads to the kernel cooling time $t_{CD}$. For two-phase ignition, $S_L^0$ is simply replaced by $S_L^{tp}$ \cite{Rochette:2018} in Eqs.~\ref{eq:source_combu} and~\ref{eq:drk/dt}. To illustrate the outcome of this process, Fig.~\ref{fig:step2} shows $t_{CD}$ as a function of $S_L^0$ and $u'$ for a spark energy of 30 mJ and a constant integral length scale of 1 cm. The gas properties used to obtain these results correspond to those of methane/air mixtures, but the range of laminar flame speed has been extended to provide a more complete picture. The range of $u'$ was extracted from the non-reacting LES: the low-velocity CRZ are characterized by low levels of turbulence, where $t_{CD}$ can reach about 1 ms, whereas in the highly turbulent shear layer of the SWJ or in the vicinity of the stagnation point, the high turbulence level induces a rapid drop of the initial kernel temperature, corresponding to a cooling time of the order of 10--100 $\mu$s. \begin{figure}[ht!] \centering \includegraphics[width=0.48\textwidth]{./FIG/Step2_Tcd_map.pdf} \caption{Kernel cooling time $t_{CD}$ map as a function of $S_L^0$ and $u'$ for an initial methane/air kernel with a spark energy of 30 mJ. The vertical dashed line corresponds to the $P$ case laminar flame speed. The vertical arrows indicate typical ranges of $u'$ in distinct areas of the swirled flow (see Fig.~\ref{fig:stream}).} \label{fig:step2} \end{figure} \subsection{Step 3: Quenching criteria} \label{ssec_indics} Following previous studies, two major mechanisms leading to kernel quenching are considered: mixing~\cite{Birch:1977} and flame stretching~\cite{Wilson:1999,Neophytou:2012}. \subsubsection{Mixture composition} \label{sssec:mixture} \paragraph{Gaseous cases} Several ignition studies of non-premixed flows clearly point out that the flammability factor $F_f$ is a critical parameter~\cite{Birch:1977, Ahmed:2007a, Neophytou:2012}, closely related to the probability of creating a sustainable flame kernel. As done in experimental studies~\cite{Birch:1977,Birch:1981}, the time-averaged statistics $\overline{Z}$, $Z'$, obtained here from the non-reacting LES, are used to construct the flammability factor. This requires, however, assuming a shape for the probability density function $P(Z)$. In free jets, the combination of the Gaussian and Dirac functions provides a fairly good estimate of $F_f$~\cite{Schefer:2011}.
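As a minimal illustration of Eq.~\ref{eq:Ff} under such a Gaussian assumption, the sketch below evaluates $F_f$ from the first two mixture fraction moments (without the Dirac contributions at $Z=0$ and $Z=1$); the flammability limits are illustrative values for methane/air mixtures, not the ones used in MIST.

\begin{verbatim}
from scipy.stats import norm

def flammability_factor(Z_mean, Z_rms,
                        Z_lean=0.028, Z_rich=0.089):
    # F_f = P(Z_lean < Z < Z_rich) for a Gaussian P(Z).
    return (norm.cdf(Z_rich, loc=Z_mean, scale=Z_rms)
            - norm.cdf(Z_lean, loc=Z_mean, scale=Z_rms))

# Well-mixed flammable pocket vs. lean, strongly fluctuating one:
print(flammability_factor(0.055, 0.005),  # close to 1
      flammability_factor(0.010, 0.020))  # well below 1
\end{verbatim}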
For more complex cases such as swirled flows, the large variety of mixture fraction distributions (see Fig.~\ref{fig:Mixing}) is better represented by a combination of the log-normal and $\beta$-distributions: \begin{equation} F_{f,model} = \gamma F_{f,\beta} + (1 - \gamma) F_{f,logN} \label{eq:Ffmodel} \end{equation} where \begin{align} F_{f,logN} = \frac{1}{2} \Big[ &\text{erf}\left( \frac{\ln(Z_{rich})-\overline{Z}}{\sqrt{2}\;Z'^2}\right) \nonumber \\ - &\text{erf}\left( \frac{\ln(Z_{lean})-\overline{Z}}{\sqrt{2}\;Z'^2}\right) \Big] \end{align} is the log-normal cumulative distribution function, and \begin{equation} F_{f,\beta} = \frac{B_{Z_{rich}}(\alpha,\beta)}{B(\alpha,\beta)} - \frac{B_{Z_{lean}}(\alpha,\beta)}{B(\alpha,\beta)} \end{equation} where $B_{z}(\alpha,\beta)$ is the incomplete $\beta$ function of parameters $\alpha$ and $\beta$ given by: \begin{equation} \alpha = \overline{Z}\left( \frac{\overline{Z}(1-\overline{Z})}{Z'}-1 \right); \quad \beta = (1-\overline{Z})\left( \frac{\overline{Z}(1-\overline{Z})}{Z'}-1 \right) \end{equation} The blending factor $\gamma$ is designed so as to make a transition from regions of low mixture fraction fluctuations, where $F_{f,logN}$ is preferred, to regions of high fluctuations, where $F_{f,\beta}$ is applied: \begin{equation} \gamma = 0.5 \left(1 + \tanh\left( \frac{Z' - \overline{Z}}{Z_{glob}}\right) \right) \end{equation} Note that $P(Z)$ may be directly extracted from the non-reacting LES. The above method is mostly presented for completeness of the model, and to highlight the importance of including the impact of recirculating gas in the local mixture composition, which was not accounted for in previous studies. The mixture fraction PDF also allows computing $\overline{Z}_{flam}$ (Eq.~\ref{zflam}), which is a second important quantity for ignition. The accuracy of the predicted values of $F_f$ and $\overline{Z}_{flam}$ in the $NP$ case is demonstrated by comparison to the actual values obtained from LES in~\ref{app:mixt_stat}. \paragraph{Spray cases} In addition to the directly available gaseous fuel, $F_f$ must take into account the evaporating liquid fuel. The characteristic evaporation time: \begin{equation} \tau_{ev} = \frac{\rho_l d_p^2}{8 \rho_g D_F \ln\left(1 + B_M\right)} \label{eq:tauev} \end{equation} is compared to the characteristic combustion time $\tau_{c}(\phi) \approx \delta_l^0(\phi) / S_L^0(\phi)$. In Eq.~\ref{eq:tauev}, $\rho_l$ and $\rho_g$ are the liquid and gaseous densities, $D_F$ is the fuel diffusivity and $B_M$ is the Spalding mass transfer number. Both Eqs.~\ref{eq:Ffmodel} and~\ref{zflam} still hold, with however a modified $F_f$ as described below. Depending on the ratio of the fresh gas equivalence ratio $\phi_g$ to the lean flammability limit $\phi_{lean}$, two archetypes of two-phase kernels are distinguished: weakly evaporation-controlled flames and evaporation-controlled flames: \begin{itemize} \item A weakly evaporation-controlled flame corresponds to $\phi_g > \phi_{lean}$, or to cases where liquid fuel evaporates very promptly: \begin{equation} U^* \frac{\tau_{ev}}{\tau_c} < 1 \label{Ustar} \end{equation} with $U^* = u_l/u_g$ the relative velocity between fuel droplets and the carrier phase.
Such a flame is very close to a purely gaseous flame, and $F_f$ is estimated as in the gaseous case with Eq.~\ref{eq:Ffmodel}, where $\overline{Z} = \overline{Z_{eff}}$ includes the evaporated fuel consumed in the flame of thickness very close to $\delta_l^0$~\cite{Rochette:2018}: \begin{equation} \overline{Z_{eff}} = \overline{Z_g} + \Gamma \, \overline{Z_l} \label{zeff} \end{equation} with $\overline{Z_l}$, $\overline{Z_g}$ the mean liquid and gaseous mixture fractions, and: \begin{equation} \Gamma = \left( \frac{\delta_l^0}{\max\left(\delta_{ev}, \delta_l^0\right)} \right)^{2/3} \end{equation} where $\delta_{ev} = u_l \, \tau_{ev}$ is the evaporation length. The fluctuating mixture fraction $Z_{eff}'$ originates from turbulent mixing and local spray evaporation. It is assumed here that cold flow evaporation is negligible compared to evaporation in the flame, so that $Z_{eff}'$ may be evaluated as: \begin{equation} Z_{eff}' = \underbrace {Z_g'}_{\substack{\text{turbulent} \\ \text{mixing}}} + \underbrace {\Gamma \frac{\rho_l}{\rho_g} \alpha_l'}_{\substack{\text{evaporation} \\ \text{in the flame}}} \label{zefffluctu} \end{equation} where $Z_g'$ and $\alpha_l'$ are again obtained from the non-reacting flow statistics. \item An evaporation-controlled flame corresponds to $\phi_g < \phi_{lean}$. In that case evaporation is the limiting process in the flame: \begin{equation} U^* \frac{\tau_{ev}}{\tau_c} > 1 \label{weak_case} \end{equation} As a consequence, the consumption rate decreases compared to the previous case, and the liquid fuel is burnt as soon as it is evaporated, leading to: \begin{align} \overline{Z_{eff}} &= \overline{Z_l} + \overline{Z_g} \label{zeff_evapcontrolled}, \\ Z_{eff}' &= Z_g' + \frac{\rho_l}{\rho_g} \alpha_l'. \label{zeff_fluctu_evapcontrolled} \end{align} Note that in the present configuration, the evaporation-controlled formulation is only used near the spray injection, where the amount of pre-vaporized fuel is below the flammability limit of \emph{n}-heptane (see Fig.~\ref{fig:evap_mix_spray}). It is expected to become more significant in realistic configurations, where the incoming air temperature and the fuel volatility are lower. In particular, altitude relight conditions are characterized by low temperatures at which very little evaporation occurs prior to ignition, and for which the evaporation-controlled formulation is especially well suited. \end{itemize} \subsubsection{Flame stretch} Flame/turbulence interaction may be responsible for significant quenching due to fragmentation of the flame kernel. Following the previous works of~\cite{Wilson:1999,Neophytou:2012}, a criterion based on the Karlovitz number is used. The estimation of $Ka$ is taken from~\cite{Abdel-Gayed:1985}: \begin{equation} Ka = 0.157\left( \nu \varepsilon \right) ^{1/2} \frac{1}{{S_L^0}^2} \label{eq:Ka_abdel} \end{equation} where $\varepsilon$ is the turbulent dissipation rate and $\nu$ the kinematic viscosity. For $SP$ cases, $S_L^0$ is replaced by the two-phase laminar flame speed $S_L^{tp}$ proposed in~\cite{Rochette:2018}. For weakly evaporation-controlled flames, $S_L^{tp}\sim S_L^0(\overline{Z}_{flam})$.
For evaporation-controlled flames, $S_L^{tp}$ is much smaller than $S_L^0$ and can be estimated by replacing $\tau_c$ by $\tau_{ev}$: \begin{equation} S_L^{tp} = \frac{\delta_l^{0*}}{\tau_{ev}} \label{slevap}, \end{equation} where $\delta_l^{0*}$ is the flame thickness at the equivalence ratio $\phi^* = \min(\phi_{tot},1)$, with $\phi_{tot}=\phi_g + \phi_l$ the total equivalence ratio. The turbulent dissipation $\varepsilon$ may be directly extracted from the LES or reconstructed from the $\overline{u}$ and $u'$ fields. In the latter case, series of instantaneous velocity fields, and the associated dissipation rate tensor, may be reconstructed assuming a Gaussian distribution. Taking the average over 20--50 reconstructed velocity fields is generally sufficient to ensure a statistically converged value of $\varepsilon$. Quenching occurs when the Karlovitz number is above a critical value $Ka_c$. Different values of $Ka_c$ are proposed in the literature. A value of $Ka_c = 1.5$ is reported in~\cite{Abdel-Gayed:1985, Neophytou:2012} for premixed flames. In \cite{Cordier:2013}, the best agreement of the ignition model with experimental data leads to $Ka_c = 4.5$. This latter value is retained in the present work, as it resulted in the best overall agreement between MIST and the set of experimental data. Further tuning of this parameter could be required in configurations exhibiting flow features not present in the current configuration. \subsection{Step 4: Kernel trajectories} \label{ssec_ppres} In a previous ignition model \cite{Neophytou:2012}, statistics of kernel trajectories were computed using a Monte-Carlo approach, calculating numerous ignition events and kernel trajectories. In contrast, the PDF of presence $p(\boldsymbol{x},r,t)$ of kernels of size $r$ at location $\boldsymbol{x}$ and time $t$ is here directly obtained from the non-reacting flow statistics. To do so, four assumptions are made: \begin{itemize} \item the velocity components follow a Gaussian distribution, \item kernel trajectory statistics follow a Markov process, \item velocity statistics of the non-reacting flow remain valid during the first instants of ignition (before thermal expansion appears), \item the flame speed is low compared to the flow velocity. \end{itemize} As often done for particle statistics, the PDF $p(\boldsymbol{x},r,t)$ is discretized in $r$-space using $N_{sec}$ sections $S_i$, as depicted in Fig.~\ref{fig:radius_sections}. In each section $i$, $p(\boldsymbol{x},r,t)=p_{i}(\boldsymbol{x},t)$ is constant. \begin{figure*}[ht!] \centering \includegraphics[width=0.8\textwidth]{./FIG/Sectional_model.pdf} \caption{Breakdown of the kernel size space into sections with transfer rates between consecutive sections.} \label{fig:radius_sections} \end{figure*} From the second assumption, the position $\boldsymbol{x}(t)$ of a kernel follows the Langevin stochastic differential equation (SDE)~\cite{Boughton:1987}: \begin{equation} \frac{\text{d} \boldsymbol{x}(t)}{\text{d} t} = \mu(\boldsymbol{x}) + \sigma(\boldsymbol{x})\eta(t), \label{eq:Langevin} \end{equation} where the initial kernel position $\boldsymbol{x}(t=0) = \boldsymbol{x}_0$ is the spark position. The function $\mu(\boldsymbol{x})$ corresponds to the deterministic (mean) motion, while the second term introduces the turbulence effect. $\eta(t)$ is a white noise (a stationary, Gaussian random process with zero mean and Dirac-delta autocorrelation).
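For intuition only, Eq.~\ref{eq:Langevin} can be sampled with an Euler--Maruyama scheme, as in the sketch below, written for a uniform (hypothetical) mean flow and turbulence level; MIST itself never samples trajectories, but instead solves the associated Fokker--Planck equation introduced next.

\begin{verbatim}
import numpy as np

def kernel_positions(x0, u_mean, u_rms, tau, dt=1e-4,
                     n_steps=200, n_paths=1000, seed=0):
    # Euler-Maruyama sampling of dx = mu dt + sigma dW, with
    # sigma^2 = D_p = u'^2 tau (the Fokker-Planck diffusivity).
    rng = np.random.default_rng(seed)
    sigma = u_rms * np.sqrt(tau)
    x = np.tile(np.asarray(x0, float), (n_paths, 1))
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=x.shape)
        x += u_mean * dt + sigma * dW
    return x  # kernel positions after n_steps * dt seconds

# Kernels released at the origin in a uniform axial mean flow:
ends = kernel_positions([0., 0., 0.],
                        u_mean=np.array([0., 0., 5.]),
                        u_rms=2.0, tau=1e-3)
print(ends.mean(axis=0), ends.std(axis=0))
\end{verbatim}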
The temporal evolution of $p_i(\boldsymbol{x},t)$ is then governed by the Fokker-Planck equation~\cite{Gardiner:2009}: \begin{align} \frac{\partial p_i(\boldsymbol{x},t)}{\partial t} = -& \frac{\partial }{\partial \boldsymbol{x}} (\mu(\boldsymbol{x}) p_i(\boldsymbol{x},t)) \nonumber \\ +& \frac{1}{2} \frac{\partial^2 }{\partial \boldsymbol{x}^2} \left( D_p (\boldsymbol{x}) p_i(\boldsymbol{x},t) \right) \nonumber \\ +& \dot{Q}_i \label{eq:FP} \end{align} As in Eq.~\ref{eq:Langevin}, $\mu(\boldsymbol{x})$ corresponds to the deterministic (mean) motion, while $D_p(\boldsymbol{x})$ introduces turbulent diffusion. These parameters are related to the flow statistics by: \begin{align} \mu(\boldsymbol{x}) &= \overline{\boldsymbol{u}} \\ D_p(\boldsymbol{x}) &= \boldsymbol{u}'^2 \tau \end{align} where $\tau$ is a characteristic time of the flow. In Eq.~\ref{eq:FP}, the source term $\dot{Q}_i$ accounts for the transfer between sections due to kernel growth and shrinking. The kernel growth rate is associated with the local turbulent flame speed $S_T(\boldsymbol{x})$, while the kernel shrinking is driven by the turbulent diffusivity $D_{th,turb}(\boldsymbol{x})$. The transfers between two neighboring sections during a time interval $\delta_t$ are then written: \begin{equation} T_{G,S_{i}\rightarrow S_{i+1}}(\boldsymbol{x},t) = p_{i}(\boldsymbol{x},t) S_T(\boldsymbol{x}) \delta_t \end{equation} \begin{equation} T_{S,S_{i}\rightarrow S_{i-1}}(\boldsymbol{x},t) = p_{i}(\boldsymbol{x},t) \frac{D_{th,turb}(\boldsymbol{x})}{\overline{r}_i} \delta_t \end{equation} where $\overline{r}_i$ is the mean kernel radius in section $S_i$. The turbulent flame speed is evaluated following~\cite{Abdel-Gayed:1987,Cordier:2013b}: \begin{equation} S_T = S_L^0 + n \left( \frac{u'}{S_L^0} \right)^c S_L^0 \end{equation} where $n$ and $c$ are model constants from~\cite{Cordier:2013b}. Although developed in the context of premixed flames, this expression is also used for the $NP$ and $SP$ cases, as accounting for an enhancement of the consumption speed by turbulence remains meaningful. Note that $S_L^{tp}$ \cite{Rochette:2018} is used instead of $S_L^0$ in the $SP$ case. The source term for each section then depends on the time after deposit $t$ and the local flow properties: \begin{itemize} \item for $t < t_{CD}$, kernels are only growing and the net change of $p_{i}(\boldsymbol{x},t)$ during a time interval $\delta_t$ is given by: \begin{equation} \dot{Q}_i = T_{G,S_{i-1}\rightarrow S_{i}} - T_{G,S_{i}\rightarrow S_{i+1}} \end{equation} \item for $t \geq t_{CD}$, if $Ka > Ka_c$, kernels shrink due to turbulence and the source term reads: \begin{equation} \dot{Q}_i = T_{S,S_{i+1}\rightarrow S_{i}} - T_{S,S_{i}\rightarrow S_{i-1}} \end{equation} On the contrary, if $Ka < Ka_c$, kernels located in flammable mixtures will grow while those located in non-flammable mixtures will shrink: \begin{align} \dot{Q}_i = & F_f(\boldsymbol{x}) ( T_{G,S_{i-1}\rightarrow S_{i}} - T_{G,S_{i}\rightarrow S_{i+1}} ) \nonumber \\ & + (1 - F_f(\boldsymbol{x}) ) ( T_{S,S_{i+1}\rightarrow S_{i}} - T_{S,S_{i}\rightarrow S_{i-1}} ) \end{align} \end{itemize} Below a minimum size $r_f$, reached with probability $p_{f}(\boldsymbol{x},t)$, ignition is considered failed. $r_f$ is approximated by the laminar flame thickness $\delta_l^0$ at stoichiometry in the $NP$ and $SP$ cases, or at the mixture equivalence ratio for the $P$ case.
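Before turning to the success criterion, the bookkeeping of these transfer rates can be summarized by a short sketch, written for a single location and a single time step; the section radii and the local $S_T$, $D_{th,turb}$, $F_f$ and $Ka$ values are placeholders (in MIST they are fields), and the absorbing treatment of the extreme sections ($r_f$ and $r_s$) is omitted for brevity.

\begin{verbatim}
import numpy as np

def section_source(p, t, t_CD, S_T, D_turb, r_bar,
                   Ka, Ka_c, F_f, dt):
    # p[i]: probability in section S_i; returns Q_i * dt.
    grow = p * S_T * dt                # T_G, S_i -> S_{i+1}
    shrink = p * D_turb / r_bar * dt   # T_S, S_i -> S_{i-1}
    up = np.roll(grow, 1)              # inflow from S_{i-1}
    up[0] = 0.0
    down = np.roll(shrink, -1)         # inflow from S_{i+1}
    down[-1] = 0.0
    if t < t_CD:      # spark-sustained phase: growth only
        return up - grow
    if Ka > Ka_c:     # quenching by turbulence: shrinking only
        return down - shrink
    # Flammable pockets grow, non-flammable pockets shrink:
    return F_f * (up - grow) + (1.0 - F_f) * (down - shrink)
\end{verbatim}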
At the other end, above a critical size $r_s$, reached with probability $p_{s}(\boldsymbol{x},t)$, the flow can no longer extinguish the flame kernel and ignition is successful. This critical size is taken equal to the integral length scale of the turbulent flow, corresponding here to $R_{ext}$, the outer radius of the SWJ at the inlet plane. Note that additional success criteria, such as requiring the flow to be directed toward the injector, could be introduced to generalize the model to other types of configurations, but these were not critical in the present case. The set of $N_{sec}$ Eqs.~\ref{eq:FP} is discretized over an unstructured grid similar to the one used for the non-reacting LES; note, however, that because the time-averaged statistics fields are smoother than the instantaneous LES fields, a coarser mesh could be used. The advective term is integrated using a two-step Taylor-Galerkin scheme that is third-order accurate in space and time \cite{Colin:2000}, while the diffusive term is solved with a second-order finite element scheme. The equations are advanced in time following an explicit CFL constraint based on $\mu(\boldsymbol{x})$. A CFL number of 0.7 is used in all the results presented hereafter. A set of Eqs.~\ref{eq:FP} is numerically integrated for each sparking location $\boldsymbol{x_0}$. Starting from the initial kernel, all $p_{i}(\boldsymbol{x},t)$ (except the one corresponding to the initial kernel size) first increase progressively, before decreasing to zero at the end of the simulation, when all kernels have reached either a quenched or an ignited state. Therefore, only $p_{f}(\boldsymbol{x},t)$ and $p_{s}(\boldsymbol{x},t)$ end with non-zero values, $p_f^{end}(\boldsymbol{x},\boldsymbol{x_0})$ and $p_s^{end}(\boldsymbol{x},\boldsymbol{x_0})$ respectively, and the probability of successful ignition for sparking at $\boldsymbol{x_0}$ is simply: \begin{equation} P_{ign}(\boldsymbol{x_0}) = \int_V p_{s}^{end}(\boldsymbol{x},\boldsymbol{x_0}) \; dV \end{equation} \section{Results} \label{sec_results} The model is now applied to the three operating conditions listed in Table \ref{Tbl:expe_cond}. The model parameters used for each case are listed in Table \ref{tab:model_param}. The choice of the number of sections was motivated by the observation that, in most cases studied here, the kernel radius distribution featured a single peak, which can be well reproduced with a relatively low number of sections. Note that the computational cost of the model is directly proportional to the number of sections. \begin{table} \caption{Summary of the model physical and numerical parameters.} \centering \begin{tabular}{| l | c c c |} \hline & $P$ & $NP$ & $SP$ \\ \hline \hline r$_{s}$ [m] & 0.01 & 0.01 & 0.008 \\ \hline r$_{f}$ [m] & 0.001 & 0.0008 & 0.001 \\ \hline $Ka_c$ & 4.5 & 4.5 & 4.5 \\ \hline $N_{sec}$ & 12 & 12 & 12 \\ \hline \end{tabular} \label{tab:model_param} \end{table} \subsection{Ignition probability maps} \label{sec:ignit_prob_maps} The results obtained with MIST for case $P$ are compared to the experiment in Fig.~\ref{fig:Pmap}. The map corresponds to the solid line box in Fig.~\ref{fig:stream}. The shape of the ignition probability distribution predicted by MIST is in fairly good agreement with the experiment. A large region of low ignition probability is found along the central axis up to an axial position of $z/D_{ext} = 1.4$, which globally follows the limits of the IRZ.
In this premixed case, flame stretch is the only quenching mechanism, illustrated in Fig.~\ref{fig:Karlovitz}(left): the Karlovitz number exceeds the critical value $Ka_c = 4.5$ only in the IRZ close to the injection. The low ignition probability is therefore the result of recirculating kernels in the IRZ, subjected to varying but high flame stretch for a long time. Aside from this central region, the ignition probability is 1 everywhere. \begin{figure}[ht!] \centering \includegraphics[width=0.48\textwidth]{./FIG/Ignit_Map_P_V2.pdf} \caption{$P$ case. Comparison between experimental (left) and MIST (right) ignition probability maps in the solid line box of Fig.~\ref{fig:stream}.} \label{fig:Pmap} \end{figure} The differences between MIST and the experiments are mostly concentrated in the transition between the low and high ignition probability regions, with sharper gradients observed in the model results. This can be expected from the model formulation, which predominantly follows the mean kernel trajectory, whereas intermediate ignition probabilities often result from equally probable kernel paths (two or more) that can differ significantly from the mean. Additionally, this case was found to be the most sensitive to the choice of $Ka_c$: values of $Ka_c$ below 2.0 resulted in an over-extended high-$Ka$ region encompassing most of the SWJ and the upstream part of the IRZ, leading to a wide over-prediction of the low-$P_{ign}$ region. For $4 < Ka_c < 8$, the region of high $Ka$ remains confined close to the stagnation point and results consistent with those of Fig.~\ref{fig:Pmap} were obtained, with the position of the low-to-high probability transition along the central axis moving downward with increasing $Ka_c$. \begin{figure}[ht!] \centering \includegraphics[width=0.48\textwidth]{./FIG/Karlovitz_compa_V2.pdf} \caption{Karlovitz number (Eq.~\ref{eq:Ka_abdel}) contours in a central cut plane through the computational domain in the $P$ (left) and $NP$ (right) cases.} \label{fig:Karlovitz} \end{figure} The $NP$ case results are now compared to experiment in Fig.~\ref{fig:NPmap}. Again, a good agreement is observed, and in both maps low ignition probability regions are found close to the methane central jet and in the wake of the air SWJ. Contrary to case $P$, the region of high Karlovitz number is very small (Fig.~\ref{fig:Karlovitz}) due to the near-stoichiometric conditions in the lower part of the IRZ. In fact, the shape of the low ignition probability regions closely follows the flammability factor distribution depicted in Fig.~\ref{fig:Mixing}: ignition is mainly controlled by mixing. \begin{figure}[ht!] \centering \includegraphics[width=0.48\textwidth]{./FIG/Ignit_Map_NP_V2.pdf} \caption{$NP$ case. Comparison between experimental (left) and MIST (right) ignition probability maps in the solid line box of Fig.~\ref{fig:stream}.} \label{fig:NPmap} \end{figure} Finally, the comparison with experiment is made for case $SP$ in Fig.~\ref{fig:carto_spray}, in the dashed box of Fig.~\ref{fig:stream}. The agreement is again quite satisfactory. The same overall topology of the ignition probability map is recovered. The entire IRZ is characterized by very low ignition probability, below $0.1$, and the CRZ is the most ignitable region of the chamber, with ignition probability above $0.7$ near the lateral wall. Finally, the gradient of $P_{ign}$ more or less coincides with the SWJ, slightly shifted in MIST by around $0.5~D_{ext}$ towards the CRZ. \begin{figure*}[h!]
\centering \includegraphics[width=0.6\textwidth]{./FIG/Ignit_Map_SP} \caption{$SP$ case. Comparison between experimental (left) and MIST (right) ignition probability maps in the dashed box of Fig.~\ref{fig:stream}.} \label{fig:carto_spray} \end{figure*} This topology of $P_{ign}$ is strongly related to the local non-reacting flow properties $Ka$ and $F_f$ shown in Fig.~\ref{fig:Ka_Ff_spray}. The very homogeneous flammable mixture, combined with a low Karlovitz number (due to low velocity fluctuation levels), found in the CRZ explains the very high ignition probability. On the contrary, the IRZ and the bottom of the SWJ are very lean with high velocity fluctuations, leading to local Karlovitz numbers above the critical value $Ka_c = 4.5$. \begin{figure}[h!] \centering \includegraphics[width=0.48\textwidth]{./FIG/Karlovitz_Ff_SP.pdf} \caption{$SP$ case. Maps of Karlovitz number (left) and flammability factor (right).} \label{fig:Ka_Ff_spray} \end{figure} \subsection{Detailed analysis} \label{ssec:LES_R} \subsubsection{Premixed case} To illustrate the capability of MIST to correctly reproduce the time evolution of kernels, the temporal evolution of kernels of all sizes, $P_{pres}(\boldsymbol{x},t)=\int P_{pres}(\boldsymbol{x},r,t)dr$, is shown in Fig.~\ref{fig:prem_transient1} for a sparking location at $(r/D_{ext} = 0.0, z/D_{ext} = 1.0)$, where both experiment and MIST indicate that the ignition probability is close to 0$\%$ (Fig.~\ref{fig:Pmap}). At this location, the mixture is flammable and the low level of turbulence results in $Ka < Ka_c$. However, the recirculating mean flow rapidly entrains most kernels towards the high Karlovitz region near the injection system, before they reach a sufficient size to resist the strong local turbulence there. This is reflected in the motion of the peak of $P_{pres}$ towards the injection system, where it finally vanishes. This behavior is consistent with the ignition failure mechanism observed both experimentally and numerically \cite{Cordier:2013, Barre:2014}. \begin{figure*}[h!] \centering \includegraphics[width=1.0\textwidth]{./FIG/Prem_IgnitSeq_V2.pdf} \caption{$P$ case. Probability density of presence $p(\boldsymbol{x},t)$ of kernels of all sizes in a central cut-plane at four instants, for sparking at $(r/D_{ext} = 0.0,z/D_{ext} = 1.0)$.} \label{fig:prem_transient1} \end{figure*} Intermediate values of the ignition probability, found at the limit of the IRZ, correspond to an increased proportion of kernels that have time to reach a sufficient size before entering the high-$Ka$ region. Two scenarios are observed: 1) a fast ignition scenario, where the kernel grows fast and leads to ignition while in the IRZ, 2) a delayed ignition scenario, where the kernel growth is sufficient to avoid extinction in the high-$Ka$ region, but not to ensure ignition there, which then occurs later in the SWJ. The existence of these two ignition modes is clearly visible in Fig.~\ref{fig:prem_transient2}, illustrating ignition in the central cut-plane of the burner when sparking at $(r/D_{ext} = 0.0,z/D_{ext} = 1.4)$, where the experimental $P_{ign}$ is 32\%. At $t = 4$ ms, $P_{ign} \simeq 15$ \% and the zone of high ignition success probability density $p_{s}(\boldsymbol{x},t)$ corresponds to upstream kernel trajectories inside the IRZ, i.e., the first scenario. Later, at $t = 12$ ms, the zone extends along trajectories in the SWJ, indicating delayed ignition of the second scenario.
The temporal evolutions of $P_s(t)=\int_V p_{s}(\boldsymbol{x},t) dV$ and $P_f(t)=\int_V p_{f}(\boldsymbol{x},t) dV$ also show the two modes, with a first increase of $P_s$ around $3$ ms, followed by a plateau, before a second increase starting around $7$ ms. These results highlight the ability of MIST to capture non-monotonic evolutions of the kernel size as its trajectory successively enters regions that promote or impede its growth. \begin{figure*}[h!] \centering \includegraphics[width=1.0\textwidth]{./FIG/Prem_IgnitSuccess_V2.pdf} \caption{$P$ Case. Probability density of successful ignition, $p_{s}(\boldsymbol{x},t)$, in a central cut-plane at two instants for sparking at $(r/D_{ext} = 0.0,z/D_{ext} = 1.4)$, and temporal evolution of $P_s(t)=\int_V p_{s}(\boldsymbol{x},t) dV$ and $P_f(t)=\int_V p_{f}(\boldsymbol{x},t) dV$.} \label{fig:prem_transient2} \end{figure*} \subsubsection{Non-premixed case} The ignition probability at three locations (shown in Fig.~\ref{fig:NPmap}) was directly computed by performing 20 LES of ignition in a previous study \cite{Esclapez:2015}. Table~\ref{tab:LES_XP_datas} reports the ignition probability obtained from experiment, LES and MIST. Both LES and MIST give very similar results, also close to measurements. Note that about 5 million CPU hours were required for each data point with LES, whereas it took only a few minutes with MIST. \begin{table}[ht!] \centering \begin{tabular}{l c c c} \hline & Exp. & LES \cite{Esclapez:2015} & MIST \\\hline PT1 & 28--70\% & 40\% & 38\% \\ PT2 & 50\% & 48\% & 50\% \\ PT3 & 80\% & 72\% & 74\% \\ \hline \end{tabular} \caption{$NP$ case. Comparison of $P_{ign}$ from experiment \cite{Cordier:2013}, LES and MIST at the three sparking locations 1, 2 and 3 shown in Fig.~\ref{fig:NPmap}.} \label{tab:LES_XP_datas} \end{table} To analyze the ignition scenarios in more depth, kernel trajectories are extracted from the LES, where each kernel is represented by the center of gravity of the volume defined by $T > 1300$ K. Both the LES trajectories and the MIST PDF of presence $p(\boldsymbol{x},t)$ of kernels of all sizes are projected onto 2D maps for the three sparking locations in Fig.~\ref{fig:compaLESMIST}. LES trajectories are colored by time to compare with the time evolution of $p(\boldsymbol{x},t)$. Results indicate that MIST qualitatively agrees with LES and is able to reproduce the different kernel motion trends associated with each sparking location: \begin{itemize} \item at PT1, the flame kernel first stays close to the stagnation point (until $\approx 1$ ms) and is eventually convected along the SWJ for successful events, \item at PT2, the sparking in the shear layer between the IRZ and the SWJ leads to two categories of kernel trajectories, either along the SWJ or trapped in the IRZ, \item at PT3, all trajectories mainly follow the SWJ, going downstream and rotating around the nozzle axis. \end{itemize} However, this comparison also highlights some limitations of the model. In the vicinity of PT1, although both LES and experiments have shown significant deformation and fragmentation of the kernel, MIST assumes that the kernel remains spherical. This difference can partially explain the wide range of instantaneous kernel trajectories observed in the LES, which is not captured by the dispersion of the trajectories in MIST. \begin{figure*}[h!] \centering \includegraphics[width=1.0\textwidth]{./FIG/Compa_model_LES_KCG.pdf} \caption{$NP$ case.
Two-dimensional projection of $p(\boldsymbol{x},t)$ of kernels of all sizes obtained from MIST (grayscale), with overlaid kernel trajectories obtained from LES (lines) colored by the time after ignition.} \label{fig:compaLESMIST} \end{figure*} \subsubsection{Spray case} As for case $NP$ in the previous section, MIST is compared to LES of ignition sequences, at the sparking location $(r/D_{ext} = 1.5, z/D_{ext} = 0.5)$. The experimental ignition probability found at this position is $50~\%$. Snapshots of the flame front (iso-$T=1500$ K) colored by the heat release rate are given in Fig.~\ref{fig:LES_spray} at different times after the spark, extracted from the LES of a successful ignition. Starting from the bottom of the CRZ, the kernel is first convected towards the injector by the recirculating flow (a). During this phase, the kernel grows as it meets favorable conditions. When arriving above the air inlet (b), the flame kernel, subjected to very high velocity fluctuations, may rapidly quench. The kernel is then convected downstream by the SWJ (c) and is still strongly shredded in this turbulent zone. If able to survive, the kernel finally reaches the much more favorable top part of the CRZ (d) after $10$ ms, where it grows fast to extend over the entire CRZ and the SWJ (e), and eventually ignites the full chamber. In this late ignition scenario, kernel convection plays a critical role. \begin{figure*}[h!] \centering \includegraphics[width=0.95\textwidth]{./FIG/LES_P4_spray} \caption{$SP$ case. Snapshots of LES for sparking at $(r/D_{ext} = 1.5,z/D_{ext} = 0.5)$. Flame front visualization (iso-$T=1500$ K) colored by heat release rate. Time after spark: (a) $1.7$ ms, (b) $3.7$ ms, (c) $7.7$ ms, (d) $13$ ms, (e) $16$ ms, (f) $20$ ms.} \label{fig:LES_spray} \end{figure*} The above LES sequence is to be compared with the prediction of MIST, illustrated in Figs.~\ref{fig:spray_isoC_rk} and~\ref{fig:spray_rkmean}. MIST predicts at this point an ignition probability of $40~\%$, close to the experimental value of $50~\%$. In Fig.~\ref{fig:spray_isoC_rk}, the cumulated iso-surface of all positions of the chamber where $r_s$ has been reached, independently of the time after spark, is very similar to Fig.~\ref{fig:LES_spray}~(e), showing that MIST is able to reconstruct the ignition scenario. \begin{figure*}[h!] \centering \includegraphics[width=0.3\textwidth]{./FIG/SP-side-view} \hspace*{2cm} \includegraphics[width=0.3\textwidth]{./FIG/SP-top-view} \caption{$SP$ case. Prediction of MIST for sparking at $(r/D_{ext} = 1.5,z/D_{ext} = 0.5)$ (red dot): final ($t \approx 12$ ms) iso-surface of all positions where $r_s$ was reached. Left: side view; Right: top view.} \label{fig:spray_isoC_rk} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[width=0.95\textwidth]{./FIG/isoV_rkmean} \caption{$SP$ case. Volume rendering of the mean flame kernel radius after $1$ ms (a), $3$ ms (b) and $7$ ms (c). } \label{fig:spray_rkmean} \end{figure*} Figure~\ref{fig:spray_rkmean} provides a front view of the iso-volume of mean flame kernel radius above $0.01$ mm at three times after sparking. After $1$ ms (a), the kernel convection phase in the bottom part of the CRZ is found similar to the LES ignition sequence (Fig.~\ref{fig:LES_spray}a). At this early time, the mean kernel size is $\approx 3$ mm and progressively increases. After $3$ ms (b), the larger iso-volume indicates a dispersion of the kernel trajectories. Kernels staying longer in the favorable CRZ grow much more than those entering the adverse SWJ.
This is demonstrated by $\bar{r_k}$ reaching $7$ mm in the CRZ while remaining below $\approx 5$ mm in the SWJ. The most advanced points of the iso-volume (towards the SWJ) correspond to kernels leaving the CRZ most rapidly, thus having the lowest radius, near $3$ mm. This is comparable to what can be observed from the LES in Fig.~\ref{fig:LES_spray}b. Finally, after $7$ ms (c), kernels that stay longer in the CRZ reach $r_s = 8$ mm. On the contrary, kernels convected downstream in the SWJ grow more slowly, as in Figs.~\ref{fig:LES_spray}c and d. For this case again, it is remarkable to observe that MIST is able to recover the wide range of flame kernel trajectories and size evolutions, and the balance between kernel growth in favorable regions and kernel destruction by strong turbulence. \section{Conclusions} \label{sec_conclusion} In this work, a model for ignition statistics (MIST) is proposed in order to predict the ignition probability from a non-reacting flow solution. More specifically, MIST aims at predicting the success of creating a sufficiently large, self-sustained flame kernel during the first few milliseconds after energy deposit. MIST differs from previous ignition models in that it directly combines local flame extinction indicators with statistics of the flame kernel trajectories, in order to include transient effects due to the flame kernel motion before ignition. In addition, MIST does not need to compute multiple independent ignition events to build kernel trajectory statistics, thanks to a fully statistical approach. This drastically reduces the computational cost, down to a few minutes to build a full ignition map. The model is tested on an academic swirled burner operated in premixed, non-premixed and two-phase conditions. In all cases, the model is able to reproduce with good accuracy the ignition probability map obtained experimentally. Detailed analysis of the model behavior indicates that MIST provides valuable insights into the ignition success and failure mechanisms, consistent with the behaviors observed from multiple ignition sequences both experimentally and numerically. This combination of predictive accuracy and efficiency makes MIST a very attractive tool for optimizing the igniter position and operating conditions of real aeronautical combustion chambers. Further improvements of the model include a better description of the interactions between the flame kernel and the walls, a critical aspect of spark plug location in practical systems. \\ \noindent {\bf Acknowledgements} The authors thank M. Cordier, J. Marrero-Santiago, B. Renou and co-workers from CORIA for fruitful collaboration. This work was performed using HPC resources from GENCI-IDRIS (Grant 2013-x20132b5031) and TGCC (allocations 2016153551 and A0032B10157 made by PRACE and GENCI, respectively).\\ \input{journaux-dec10.tex} \input{Esclapez_CF_MIST.bbl}
\section{Motivation and Resolution} \subsection{Individualized predictions and transitional inference}\label{sec:unique} Predicting an individual's outcome, such as for personalized medicine, is an alluring proposition. Who would not want to know how a treatment would work for \textit{me} before such treatment even begins? But in order to test the effectiveness of a treatment, we will need some guinea pigs. But who can approximate \textit{me}? Someone with my genetic profiles, age, diet, exercise habits, and medical history? But how detailed should the medical history be? What about family medical history? And how extended should my ``family'' be? The arrival of Big Data permits us to look into such questions at deeper levels than before, but it does not make our job easier in any fundamental way. Finding a proxy population to approximate an individual is inherently an ill-defined problem from a mathematical perspective, since each of us is defined by an essentially infinite number of attributes, denoted by $p=\infty$. The implied uniqueness of ``me'' then renders $n=0$; that is, there will never be any genuine guinea pig for me. {\color{black} Epistemologically, this need of ``transition to the similar'' has been pondered by philosophers from Galen to Hume \citep[e.g., see][]{hankinson1987causes,hankinson1995growth}. For example, Galen, a physician and philosopher in the Roman Empire, wrote (see \citet{hankinson1987causes}): \begin{quote} ``In cases in which there is no history, or in which there is none of sufficient similarity, there is not much hope. And the same thing is true in the case of transference of one remedy from one ailment to another similar to it: one has a greater or smaller basis for expectation of success in proportion to the increase or decrease in similarity of the ailment, whether or not history is involved. And the same goes for the transference from one part of the body to another part: expectation of success varies in direct proportion to the similarity.'' \end{quote} Galen's framing is essentially a statistical one, with a nice blend of Bayesian (the reliance on history) and frequentist (the emphasis on proportions regardless of history) reasoning, albeit long before either of these qualifying terms was invented. Perhaps it is a surprise, then, that to the best of our knowledge there is no statistical theory for this kind of \textit{transitional inference} \citep{hankinson1995growth}. We surmise that this absence is largely due to the fact that transitional inference goes outside of our traditional inductive framework, since it is not about inferring a population from samples of individuals, but rather about predicting individuals' outcomes by learning from a proxy population. The notion of \textit{similarity}, central to transitional inference, is also a challenging one to metricize in general.} However, the concept of multi-resolution (MR) analysis in engineering and applied mathematics, such as wavelets \cite[see][]{meyer1993wavelets,daubechies1992ten}, turns out to be rather useful for establishing such a theoretical framework. For wavelets, variations in data are first decomposed according to their resolution levels. For image data, the resolution level is the pixel resolution as we ordinarily define it, and the concept of multi-resolution can be easily visualized by the common practice of zooming in and out when taking pictures. Zooming too much or too little would both result in losing sight of the big picture, figuratively and literally.
Our central task is then to identify a suitable \textit{primary resolution} to separate signals (i.e., lower-resolution wavelet coefficients) from noise (i.e., higher-resolution wavelet coefficients); see \citet{donoho1995wavelet} and especially \citet{johnstone2011gaussian} for a survey. The choice of primary resolution thus determines the unit of our inference, that is, the degree of individualization. The search for the primary resolution is generally a quest for an age-old bias-variance trade-off: estimating more precisely a less relevant individual assessment versus estimating less precisely a more relevant one. Because the MR framework permits the resolution level to be potentially infinite, it can also be viewed as the predictive counterpart of the estimation method of sieves for dealing with infinite-dimensional models. In order to reveal as early as possible what this framework can offer, we follow a reviewer's suggestion to defer a literature review and comparison with the standard large-$p$-small-$n$ framework to the end of our article. \subsection{A fundamental resolution decomposition}\label{sec:identity} To set up our MR framework, we consider an outcome variable $Y$ sharing the same probability space $(\Omega, {\cal F}, P)$ as an information filtration $\{{\cal F}_r, r=0, 1, \ldots\}$, where ${\cal F}_{r-1}\subset {\cal F}_{r}$, and $r$ indexes our resolution level. Here ${\cal F}_0$ corresponds to a population of interest (e.g., those who are infected by a certain virus) from which target individuals come, and ${\cal F}_{\infty}=\cup_{r=0}^\infty {\cal F}_r$ permits us to define (unique) individuality. For example, ${\cal F}_r$ is the $\sigma$-field generated by covariates $\{X_0, X_1, \ldots, X_r\}$, and hence determining the primary resolution is the same as determining how many covariates should be used for predicting $Y$ for a given information filtration (see Section~\ref{sec:ordering} for the issue of ordering the covariates). Let $\mathbb{E}(\cdot)$ and $\mathbb{V}(\cdot)$ denote mean and variance, respectively. Denote $\mu_r=\mathbb{E}[Y|{\cal F}_r]$ and $\sigma_r^2=\mathbb{V}[Y|{\cal F}_r]$ for all $r$'s, including $r=0$ and $r=\infty$ (and assume these are well defined). Then by repeatedly applying the iterative law $\mathbb{V}[Y|{\cal F}_r]=\mathbb{E}[\mathbb{V}( Y|{\cal F}_{s})|{\cal F}_r]+\mathbb{V}[\mathbb{E}( Y|{\cal F}_{s})|{\cal F}_{r}]$, where $s>r$, we have the usual ANOVA decomposition \citep{meng2014trio}, \begin{equation}\label{eq:keyi} \sigma^2_r=\mathbb{E}[\sigma^2_{\infty}|{\cal F}_r]+\sum_{i=r}^{\infty}\mathbb{E}[(\mu_{i+1}-\mu_i)^2|{\cal F}_r], \quad {\rm for\ any }\ r\ge 0. \end{equation} Decomposition \eqref{eq:keyi} reminds us that the usual dichotomy between \textit{variance}, as a measure of random variations, and \textit{bias}, as a measure of systematic differences, is an artificial one, except possibly at the infinite resolution level. That is, the variance at any particular resolution level is merely the accumulation of all the (squared consecutive) systematic differences, i.e., biases, at higher resolution levels, plus $\sigma^2_\infty$, the \textit{intrinsic variance}. Conceptually $\sigma^2_\infty$ cannot be ascertained from any empirical data, because we can never be sure whether the residual variance from whatever model we fit is due to $\sigma^2_\infty$ or to a limitation of our always finite amount of data. It therefore seems inconsequential to set $\sigma_\infty^2=0$ since we can never prove it false.
This proposition should be particularly acceptable to those who believe that the world is ultimately deterministic once all its operating mechanisms are measured and understood \citep[e.g., see][]{peat2002certainty}. However, as we shall reveal in this article, whether or not to set $\sigma_\infty^2$ to zero has profound implications for the bias-variance trade-off phenomenon. To the best of our knowledge, the statistical literature has not investigated this phenomenon for chaotic dynamic systems \citep[e.g.,][]{devaney2018introduction}, since when $\sigma_\infty^2=0$, the setup here enters the realm of deterministic but potentially chaotic systems. The corresponding findings therefore may be counter-intuitive (initially) to statisticians, but they might provide a bridge to the growing literature in machine learning that casts doubts on the applicability of the bias-variance trade-off, especially the literature surrounding the phenomenon of ``double descent'' \citep[e.g.,][]{belkin2019reconciling, belkin2019two, hastie2019surprises,nakkiran2019deep}, which we shall explain and extend to ``multiple descents'' later in this article. Regardless of how we treat $\sigma^2_\infty$, declaring that a resolution level $R$ is our primary resolution implies that all the information conveyed by variations at resolution levels higher than $R$ can be effectively ignored when predicting $Y$. The MR formulation therefore permits us to quantify the degree of individualization, and to be explicit about the two contributing factors of our overall prediction error: (I) the resolution bias due to choosing a finite $R$; and (II) the estimation error at the given resolution $R$. The MR framework therefore integrates the model selection step (I) with the model estimation step (II), and hence it does not need to treat the issue of selection post hoc \citep[e.g.,][]{berk2013,lee2016exact,tibshirani2016exact}. Furthermore, since the filtration $\{{\cal F}_r, r=0, 1, \ldots \}$ forms a cumulative ``information basis'', the choice of the optimal $R_n$ for a given data set of size $n$ is in the same spirit as finding a \textit{sparse representation} in wavelets, for which there is a large literature \cite[see][]{poggio1998sparse,donoho2003optimally}, though here perhaps it is more appropriate to term it a \textit{parsimonious representation}. \subsection{Time-honored intuitions, and timely new insights?} Our findings confirm some time-honored intuitions and build new ones. Specifically, in Section \ref{sec:multi_resol_pred} we first decompose the total prediction error into three components: the ultimate risk, the resolution bias, and the estimation error. We then provide an overview of, and highlights on, how the optimal resolution depends on the decay rates of the resolution bias and the corresponding estimation error under a particular ordering of covariates, respectively in the stochastic world (i.e., $\sigma^2_\infty>0$) and the deterministic world (i.e., $\sigma^2_\infty=0$). Section~\ref{sec:multi_resol_pred} concludes with some theoretical insights on the issue of ordering the covariates. Sections~\ref{sec:theory_linear} and \ref{sec:theory_tree} then establish our general results with an infinite number of continuous and categorical predictors, and illustrate them with linear regression and tree regression, respectively. In particular, in Sections~\ref{sec:linear_zero_tau} and \ref{sec:determin_categorial}, we report some intriguing findings when $\sigma_\infty^2=0$ for these two regression models, respectively.
In this world without variance, the optimal resolution may rightly prefer the direction of over-fitting in the traditional sense; indeed the optimal resolution level can even approach infinity. But this preference does not violate the time-honored bias-variance trade-off principle because, without variance, the optimal trade-off may have to put all its eggs in the basket of bias. We also find that the predictive error curve can exhibit double descents or even arbitrarily many descents without ever entering the over-parameterized realm. These findings might provide a new angle to investigate very flexible and saturated models, such as deep learning networks, to understand their seemingly magical ability to resist over-fitting. That is, with a huge amount of data, it is conceivable that an exceedingly rich and flexible deterministic model class can learn to practically exhaust all patterns detectable with reasonable chances in reality (which can be far fewer than in theory). In such cases, we would not need $\sigma_\infty^2>0$ to absorb the imperfection of the model, effectively rendering it a deterministic system, a system that prefers ``over-fitting'' in the traditional sense. This is also explored empirically in Section \ref{sec:finite}, where we summarize a simulation study with linear models that investigates the practicality of the MR approach, employing cross-validation and other methods for selecting the primary resolution in practice. The details of the study, as well as all the technical proofs in our article, are deferred to the Appendices. Section~\ref{sec:main_practical} completes our exploration by making connections to relevant literature and discussing further work. \section{A Multi-Resolution Framework}\label{sec:multi_resol_pred} \subsection{Prediction with potentially infinitely many predictors} To start, let $\odot$ be a member of a target population, which can be as small as a single individual, and let $Y(\odot)$ be a univariate response from $\odot$, which can be discrete (e.g., a treatment success indicator) or continuous (e.g., the change of the cholesterol level due to a treatment). Typically the investigators have some prior knowledge about which of the individual's attributes play more critical roles in determining $Y$. But, philosophically and practically, no one can be certain about what constitutes the complete set of relevant predictors. Statistically we can model such a situation by requiring the distribution of $Y(\odot)$ to depend on potentially infinitely many attributes of $\odot$, denoted by $\vec{\bm{X}}_{\infty}(\odot)=\{X_0(\odot), X_1(\odot), X_2(\odot), \ldots \}$. In reality we can never observe infinitely many covariates, but the arrival of the digital age has created many situations where we have far more predictors than the sample size. Our job is to seek a small subset of the predictors that predicts the outcome with enough accuracy to make our prediction useful. We use $f_{\odot}$ to denote the joint probability mass/density function of the response and covariates for the target individual $\odot$. To learn about $f_{\odot}$, especially the dependence of $Y(\odot)$ on $\vec{\bm{X}}_{\infty}(\odot)$, we need to collect a training set $\mathcal{T}_n=\{(y_i,\vec{\bm{x}}_{i \infty}): i=1,2,\ldots,n\}$, consisting of (assumed) independent and identically distributed (i.i.d.) samples from a training (proxy) population. Clearly the phrase ``training'' implies that we need some assumptions to link $\mathcal{T}_n$ to the target population.
The ideal assumption of course is that $f_{\odot}$ equals the joint probability mass/density function $f$ of $(Y, \vec{\bm{X}}_\infty)$ for the training population. Whereas all attempts should be made to mimic the target population when we form the training population, it is wise to permit our framework sufficient flexibility to admit cases where $f$ may differ from $f_{\odot}$, but in an approximately known way. Mathematically, this flexibility can be handled by introducing a weight function \begin{equation}\label{eq:weight} w_{\odot}(Y, \vec{\bm{X}}_{\infty})= \frac{f_{\odot}(Y, \vec{\bm{X}}_{\infty})}{f(Y, \vec{\bm{X}}_{\infty})}=\frac{f_{\odot}(Y| \vec{\bm{X}}_{\infty})}{f(Y|\vec{\bm{X}}_{\infty})}\frac{f_{\odot}(\vec{\bm{X}}_{\infty})}{f(\vec{\bm{X}}_{\infty})}. \end{equation} Normally it is almost inevitable to assume $f_{\odot}(Y| \vec{\bm{X}}_{\infty}) \approx f(Y|\vec{\bm{X}}_{\infty})$, that is, the (stochastic) relationships between the outcome and the predictors for the target population and the training population must be approximately the same, because otherwise our selection of the training sample is a very poor one. Consequently, (\ref{eq:weight}) implies $w_{\odot}(Y, \vec{\bm{X}}_{\infty})\approx f_{\odot}(\vec{\bm{X}}_{\infty})/f(\vec{\bm{X}}_{\infty})$, which is easier to estimate since it merely involves adjusting the marginal distribution of $\vec{\bm{X}}_\infty$, known as a ``covariate shift'' in the literature \citep[see, e.g.,][]{learningdiff2007, sugiyama2012machine}. {\color{black} However, when $\odot$ is indeed a single individual or beyond the support of the training population, the weight $w_{\odot}(Y, \vec{\bm{X}}_{\infty})$ is not defined without lowering the resolution level for evaluation; see \citet{meng2020}. We leave the choice of weights for a future study, as our focus in this article is on the choice of optimal resolutions with given weight functions.} To avoid confusion, we use $\mathbb{E}_{\odot}$ and $\mathbb{E}$ to denote the expectations over the target and the training populations, respectively. To evaluate the prediction performance of a prediction function $\hat{y}(\vec{\bm{X}}_\infty)$, we can adopt a loss function $\mathcal{L}(y, \hat{y})$, which is problem-dependent. Clearly, we can minimize the expected loss $\mathbb{E}_{\odot}[\mathcal{L}(Y, \hat{y}(\vec{\bm{X}}_\infty ))]$ via minimizing $\mathbb{E}[\mathcal{L}_{\odot}(Y, \hat{y}(\vec{\bm{X}}_\infty ))]$, where $\mathcal{L}_{\odot}(Y, \hat{y}(\vec{\bm{X}}_\infty )) \equiv \mathcal{L}(Y, \hat{y}(\vec{\bm{X}}_\infty )) w_{\odot}(Y, \vec{\bm{X}}_\infty)$; the subscript $\odot$ indicates its dependence on the utility of prediction and the target population of interest. With this setup, we proceed as follows. At each resolution $r$, we restrict our prediction to a family of functions $\{g(\vec{\bm{x}}_{r}; \bm{\theta}_r)\}$, where $\vec{\bm{x}}_{r}=(x_0, \ldots, x_r)$. For notational simplicity, we suppress the explicit dependence of $g(\cdot)$ on $r$, but rather use the inputs $\vec{\bm{x}}_{r}$ and $\bm{\theta}_r$ to emphasize such dependence implicitly. Note that $\bm{\theta}_r$ denotes a generic parameter whose dimension can vary with $r$. For example, $\dim(\bm{\theta}_r) = \binom{r+2}{2}$ if $g(\vec{\bm{x}}_{r};\bm{\theta}_r)$ is a linear function of the covariates up to resolution $r$ and of all their quadratic terms and pairwise interactions. Generally, we will choose $g(\cdot)$ such that the family of prediction functions becomes richer as resolution increases.
That is, for any $r< r'$, any prediction function $g(\vec{\bm{x}}_{r}; \bm{\theta}_r)$ at resolution $r$, viewed as a function of $\vec{\bm{x}}_{r'}$, belongs to the family of prediction functions at resolution $r'$. At each resolution $r$, the optimal prediction is then $g(\vec{\bm{x}}_{r}; \bm{\theta}_r^*)$, with $ \bm{\theta}^*_r \equiv \argmin_{\bm{\theta}_r} \mathbb{E}[\mathcal{L}_{\odot}( Y, g(\vec{\bm{X}}_{r}; \bm{\theta}_r))]. $ A usual estimator for $\bm{\theta}^*_r$ is obtained by minimizing the empirical risk: $ \hat{\bm{\theta}}_r \equiv \arg\min_{\bm{\theta}_r} \sum_{i=1}^{n} \mathcal{L}_{\odot}(y_i, g(\vec{\bm{x}}_{ir};\bm{\theta}_r)). $ Hence, once we choose the primary resolution $R$, we predict $Y$ by $g(\vec{\bm{x}}_{R}; \hat{\bm{\theta}}_R)$ for an individual with covariates $\vec{\bm{x}}_\infty$, and estimate the prediction error $ \mathbb{E}[\mathcal{L}_{\odot}( Y, g(\vec{\bm{X}}_{R}; \hat{\bm{\theta}}_R) )] $ by the empirical risk $n^{-1}\sum_{i=1}^{n} \mathcal{L}_{\odot}(y_i, g(\vec{\bm{x}}_{iR};\hat{\bm{\theta}}_R)) $, or by cross-validation. \subsection{A trio decomposition of the prediction error}\label{sec:decomposition} To better understand the prediction error at a resolution $R$, we decompose $\mathbb{E}[\mathcal{L}_{\odot}( Y, g(\vec{\bm{X}}_{R}; \hat{\bm{\theta}}_R) )]$ into three parts: the ultimate risk, the resolution bias at resolution $R$, and the estimation error at resolution $R$. {\color{black} The \textit{ultimate risk} is $\tau^2 \equiv \mathbb{E}[\mathcal{L}_{\odot}(Y, g(\vec{\bm{X}}_\infty; \bm{\theta}_{\infty}^*))],$ which depends on the families of functions used for prediction. Specifically, it has two sources, one due to model misspecification and the other due to the \textit{intrinsic variation} at the infinite resolution, i.e., $f(Y|\vec{\bm{X}}_{\infty})$. That is, the intrinsic variance $\sigma^2_{\infty}=\mathbb{V}(Y|\vec{\bm{X}}_{\infty})$ can be positive (or even infinite) in a stochastic world.} The \textit{resolution bias} at resolution $R$ then is \begin{align*} A(R) & = \sum_{r=R+1}^\infty \left\{ \mathbb{E}[\mathcal{L}_{\odot}(Y, g(\vec{\bm{X}}_{r-1};\bm{\theta}_{r-1}^*))] - \mathbb{E}[\mathcal{L}_{\odot}(Y, g(\vec{\bm{X}}_{r};\bm{\theta}_{r}^*))] \right\}. \end{align*} When the family of prediction functions becomes richer as resolution increases, $A(R)$ is non-increasing in $R$ and approaches zero as $R\rightarrow\infty$, i.e., $\lim_{R\rightarrow \infty} A(R) = 0.$ Finally, the \textit{estimation error} at resolution $R$, \begin{align*} \varepsilon(R,\mathcal{T}_n) = \mathbb{E}[\mathcal{L}_{\odot}(Y, g(\vec{\bm{X}}_{R};\hat{\bm{\theta}}_R))] - \mathbb{E}[\mathcal{L}_{\odot}(Y, g(\vec{\bm{X}}_{R};\bm{\theta}_R^*))], \end{align*} is non-negative by the optimality of $\bm{\theta}_R^*.$ From the above, the prediction error at resolution $R$ using training set $\mathcal{T}_n$ can be decomposed as \begin{align}\label{eq:decomposition} \mathbb{E}[\mathcal{L}_{\odot}( Y, g(\vec{\bm{X}}_{R}; \hat{\bm{\theta}}_R))] & = \tau^2 + A(R) + \varepsilon(R,\mathcal{T}_n).
\end{align} As we shall show shortly, theoretically, we can gain good insight by considering the averaged version of this decomposition, that is, \begin{align}\label{eq:decomposition_average_training} \mathbb{E}_n\left[\mathbb{E}[\mathcal{L}_{\odot}( Y, g(\vec{\bm{X}}_{R}; \hat{\bm{\theta}}_R))]\right] & = \tau^2 + A(R) + \varepsilon(R, n), \end{align} where, with slight abuse of notation, $ \varepsilon(R, n)=\mathbb{E}_n [\varepsilon(R,\mathcal{T}_n )]$, and $\mathbb{E}_n$ denotes the expectation over all training sets of size $n$. {\color{black} It is worth noting that \eqref{eq:decomposition} is an extension of the ANOVA decomposition \eqref{eq:keyi} in expectation, with \eqref{eq:keyi} being a special case with $\mathcal{L}_{\odot}(y,\hat{y}) = (y-\hat{y})^2$ and $g(\vec{\bm{X}}_{r}; \bm{\theta}_r^*) = \mathbb{E}(Y \mid \vec{\bm{X}}_r)$ for $r\ge 1$, i.e., the prediction functions are correctly specified. Under this special case, the ultimate risk $\tau^2$ reduces to $\mathbb{E}(\sigma^2_{\infty})$. We remark that in general $\tau^2 \ge \mathbb{E}(\sigma^2_{\infty})$, with equality holding when the prediction functions are correctly specified. Because $\sigma^2_{\infty} \ge 0$, a zero $\tau^2$ then must imply $\sigma^2_{\infty}=0$ (almost surely), i.e., a deterministic world without variance. Here, as in \eqref{eq:keyi}, $\sigma^2_r=\mathbb{V}[Y \mid \vec{\bm{X}}_{r}]$ and $\mu_r=\mathbb{E}[Y \mid \vec{\bm{X}}_{r}] = g(\vec{\bm{X}}_{r}; \bm{\theta}_r^*)$, which is estimated by $\hat{\mu}_r = g(\vec{\bm{X}}_{r}; \hat{\bm{\theta}}_r)$. The resolution bias at resolution $R$ reduces to $\sum_{r=R}^{\infty}[\mathbb{E}(\sigma^2_r) - \mathbb{E}(\sigma^2_{r+1})] = \sum_{r=R}^{\infty}\mathbb{E} (\mu_{r+1}-\mu_r)^2$, and the estimation error to $\mathbb{E}(\hat{\mu}_R - \mu_R)^2$. Consequently, \eqref{eq:decomposition} reduces to \begin{align} & \mathbb{E}(\sigma_R^2) + \mathbb{E}(\hat{\mu}_R - \mu_R)^2 =\mathbb{E} (\sigma^2_{\infty}) +\sum_{r=R}^{\infty}\mathbb{E} (\mu_{r+1}-\mu_r)^2 + \mathbb{E}(\hat{\mu}_R - \mu_R)^2, \label{eq:quad} \end{align} which is equivalent to \eqref{eq:keyi} by further averaging over ${\cal F}_r$ (i.e., the conditioning in \eqref{eq:keyi}).} Because in \eqref{eq:decomposition} and \eqref{eq:decomposition_average_training} the ultimate risk is not affected by the resolution (under the assumption that the functional form is the same at the infinite resolution), for any training set $\mathcal{T}_n$, the optimal primary resolution that minimizes the prediction error in \eqref{eq:decomposition} is \begin{align*} R_{\mathcal{T}_n,\text{opt}} & = \arg\min_{R} \mathbb{E}[\mathcal{L}_{\odot}( Y, g(\vec{\bm{X}}_{R}; \hat{\bm{\theta}}_R) )] = \arg\min_{R}\left[ A(R) + \varepsilon(R,\mathcal{T}_n) \right]. \end{align*} Similarly, the optimal primary resolution that minimizes the prediction error in \eqref{eq:decomposition_average_training} is \begin{align*} R_{n,\text{opt}} & = \arg\min_{R} \mathbb{E}_n\left[ \mathbb{E}[\mathcal{L}_{\odot}( Y, g(\vec{\bm{X}}_{R}; \hat{\bm{\theta}}_R) )]\right] = \arg\min_{R}\left[ A(R) + \varepsilon(R, n) \right]. \end{align*} Studying $R_{\mathcal{T}_n,\text{opt}}$ or $R_{n,\text{opt}}$ for a particular training set $\mathcal{T}_n$ or a particular size $n$ is generally difficult. We therefore resort to the usual asymptotic strategy. That is, as $n$ goes to infinity, we seek a sequence $\{R_n\}_{n=1}^\infty$ such that $A(R_n)+ \varepsilon(R_n,\mathcal{T}_n)$ or $A(R_n) + \varepsilon(R_n, n)$ converges to zero (in probability) as fast as possible.
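To make the selection step above concrete, the following sketch (in Python; the simulated data-generating model, its exponentially decaying resolution bias, and the fold count are illustrative assumptions, not part of our framework) selects a primary resolution under squared loss by minimizing a $K$-fold cross-validation estimate of the prediction error, which estimates $\tau^2 + A(R) + \varepsilon(R,\mathcal{T}_n)$ and hence is minimized at the same $R$ up to the constant $\tau^2$.
\begin{verbatim}
# A minimal sketch (hypothetical): choose the primary resolution R by
# K-fold cross-validation over nested least-squares fits.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, p_max, xi=1.0):
    # beta_r = sqrt(A(r-1) - A(r)) yields resolution bias A(r) = e^{-xi r}
    # (this construction is described later, in the double-descent section).
    A = np.exp(-xi * np.arange(p_max + 1))
    beta = np.sqrt(A[:-1] - A[1:])
    X = rng.standard_normal((n, p_max))
    y = X @ beta + 0.5 * rng.standard_normal(n)  # tau^2 = 0.25
    return X, y

def cv_error(X, y, R, folds):
    # Average held-out squared error of OLS on the first R covariates.
    err = 0.0
    for held in folds:
        train = np.setdiff1d(np.arange(len(y)), held)
        Z = np.column_stack([np.ones(len(train)), X[train, :R]])
        theta = np.linalg.lstsq(Z, y[train], rcond=None)[0]
        Zh = np.column_stack([np.ones(len(held)), X[held, :R]])
        err += np.sum((y[held] - Zh @ theta) ** 2)
    return err / len(y)

X, y = simulate(n=200, p_max=50)
folds = np.array_split(rng.permutation(len(y)), 5)  # shared across all R
cv = {R: cv_error(X, y, R, folds) for R in range(1, 31)}
print("selected primary resolution:", min(cv, key=cv.get))
\end{verbatim}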
We will adopt the notation $a_n \asymp b_n$ if two sequences $\{a_n\}$ and $\{b_n\}$ satisfy $a_n = O(b_n)$ and $b_n = O(a_n)$, and similarly, $\tilde{a}_n \overset{\mathbb{P}}{\asymp} \tilde{b}_n$ if random sequences $\{\tilde{a}_n\}$ and $\{\tilde{b}_n\}$ satisfy $\tilde{a}_n = O_{\mathbb{P}}(\tilde{b}_n)$ and $\tilde{b}_n = O_{\mathbb{P}}(\tilde{a}_n)$, using the usual definition of $O_{\mathbb{P}}$. We also use the notation $a_n \gtrsim b_n$ for $b_n=O(a_n)$. \subsection{Optimal resolution and learning rate in the stochastic world} \label{sec:rate_optimal_resolution} Intuitively, there must be a trade-off in determining the optimal $R_n$. To control the resolution bias $A(R_n)$, we desire a large $R_n$ because of the monotonically decreasing nature of $A(R)$; for $A(R_n)$, we will consider four scenarios, representing four different levels of sparsity. However, to control the estimation error, we want a small $R_n$ to reduce the number of model parameters to be estimated. When the intrinsic variance $\sigma_\infty^2>0$, under some regularity conditions (e.g., our estimation methods are efficient), we have the usual $\varepsilon(R_n, n) \asymp \dim (\bm{\theta}_{R_n})/n$ asymptotics. Hence we need $\dim (\bm{\theta}_{R_n})=o(n)$ to ensure $\varepsilon(R_n,n)$ converges to zero as $n \rightarrow\infty$. \begin{table} \centering \caption{Rate-optimal $R_n$ and minimal error $L_n\equiv A(R_n) + \varepsilon(R_n,n)$ in a stochastic world. All $c_n$'s are of $O(1)$ but satisfy different constraints as specified in Theorem~\ref{th:cont} (Section~\ref{sec:gen_result_linear}) and Theorem~\ref{th:disc} (Section~\ref{sec:general_regression_tree}).} \label{tab:optimal_rate} \resizebox{\columnwidth}{!}{% \begin{tabular}{|l|c|c|c|c|} \hline \diagbox{$\varepsilon(r,n)$}{$A(r)$} & $\substack{{\rm Hard\ Thresholding}\\ 1_{\{r < r_0\}}}$ & $\substack{{\rm Exponential\ Decay}\\ e^{-\xi r} \ (\xi>0) }$ & $\substack{{\rm Polynomial\ Decay} \\ r^{-\xi} \ (\xi>0) }$ & $\substack{{\rm Logarithmic\ Decay}\\ \log^{-\xi}(r) \ (\xi>0) }$ \\ \hline Polynomial in $r$ & $R_n \asymp c_n \ge r_0$ & $c_n\log(n)$ & $c_n n^{1/(\xi+\alpha)}$ & $\frac{c_nn^{1/\alpha}}{\log^{\xi/\alpha}(n)}$\\ $r^{\alpha}/n$ $(\alpha > 0)$ & $L_n\asymp 1/n$ & $\log^{\alpha}(n)/n$ & $n^{-\xi/(\xi+\alpha)}$ & $[\log(n)]^{-\xi}$ \\ \hline Exponential in $r$ & $R_n \asymp c_n \ge r_0$ & $\frac{\log(n)+\log c_n}{\xi+\log\alpha}$ & $c_n\log(n)$ & $c_n\log(n)$\\ $\alpha^r/n$ $(\alpha>1)$ & $L_n\asymp 1/n$ & $ n^{-\xi/(\xi+\log\alpha)}$ & {$[\log(n)]^{-\xi}$} & $[\log\log(n)]^{-\xi}$\\ \hline \end{tabular}% } \end{table} Table~\ref{tab:optimal_rate} provides a high-level preview of the general asymptotic results under the above setting, with four (common) choices of the decay rate for $A(r)$. What do these asymptotic results tell us? First, the hard-thresholding cases correspond to the classical parametric setting, with a fixed number ($r_0$) of predictors. Hence, as long as our resolution level $R_n$ exceeds $r_0$ (arbitrarily often), we will reach the classical $n^{-1}$ error rate, excluding the ultimate risk (which includes the intrinsic variation). Second, the rate-optimal resolution $R_n$---and hence the minimal prediction error---depends critically on both the decay rate of $A(r)$ and the estimation error $\varepsilon(r, n)$.
When $\varepsilon(r, n)$ grows polynomially with the resolution level (e.g., the continuous-covariate cases), we can still practically achieve the $n^{-1}$ rate when $A(r)$ decays exponentially, because the price we pay is merely a $\log^\alpha(n)$ term. However, if $\varepsilon(r, n)$ grows exponentially (e.g., with discrete covariates), then although $R_n$ is still practically of $\log(n)$ type, the parametric error rate $n^{-1}$ is no longer achievable even if $A(r)$ decays exponentially. Instead, we can achieve only a nonparametric-like error rate of the form $n^{-\xi/(\xi+\log\alpha)}$, which reduces to $n^{-1}$ only if the decay rate parameter $\xi$ for $A(r)$ goes to infinity. Third, when $A(r)$ decays polynomially, $R_n$ takes on different rate forms depending on how the estimation error varies with the resolution level $r$, that is, (A) polynomial in $n$ for polynomial estimation error versus (B) $\log(n)$ for exponential estimation error. More importantly, the difference in the corresponding minimal prediction errors tells us that in case (A), the individualized learning rate is slow but still practical. However, case (B) belongs to the situation where the individualized learning rate is too slow to be useful. The same is true when the decay rate is logarithmic, because then the prediction error rate is no better than that of case (B); see the last column of Table~\ref{tab:optimal_rate}. Therefore, among the eight scenarios in Table~\ref{tab:optimal_rate}, only the first five (counting first top to bottom and then left to right) permit practical individualized learning. Here we give a side note on the asymptotic expressions in Table \ref{tab:optimal_rate}. First, a more rigorous expression for the polynomial estimation error is $\varepsilon(r,n) \asymp \max\{r^\alpha,1\}/n$. We simply use $r^\alpha/n$ not only for descriptive convenience, but also because $r\ge 1$ is required for achieving rate-optimal prediction when $A(0)>0$. Second, the decay rates for resolution biases, e.g., $r^{-\xi}$ and $\log^{-\xi}(r)$, may be well-defined only for $r$ larger than a certain value. Whenever such a quantity is not prescribed, we can view it as a finite positive constant. Again, this complication has little relevance for our asymptotic theory for the rate-optimal resolution, which must go to infinity as $n\rightarrow \infty$ when $A(r)>0$ for any finite $r$. \subsection{Optimal resolution and learning rate in the deterministic world} \label{sec:rate_optimal_resolution_zero} \begin{table} \centering \caption{Rate-optimal $R_n$ and minimal error $L_n\equiv \text{PE}_n$ in a deterministic world. All $c_n$'s are of $O(1)$ but satisfy different constraints as specified in Theorem~\ref{th:contzero} (Section~\ref{sec:linear_zero_tau}) and Theorems~\ref{thm:binary_zero_tau} and \ref{thm:lower_bound_binary_tau2_zero} (Section~\ref{sec:determin_categorial}). Note: as in Table~\ref{tab:optimal_rate}, $\xi>0$. In some cases, the forms of rate-optimal $R_n$ are only sufficient but not necessary for achieving the optimal rate.
}\label{tab:optimal_rate_zero} \resizebox{\columnwidth}{!}{% \begin{tabular}{|l|c|c|c|c|} \hline \diagbox{Model}{$A(r)$} & $\substack{{\rm Hard\ Thresholding}\\ 1_{\{r < r_0\}}}$ & $\substack{{\rm Exponential\ Decay}\\ e^{-\xi r} }$ & $\substack{{\rm Polynomial\ Decay} \\ r^{-\xi} }$ & $\substack{{\rm Logarithmic\ Decay}\\ \log^{-\xi}(r) } $ \\ \hline {Linear} & $n-3 \ge R_n \ge r_0$ & {$R_n=n - c_n$} & {$c_n n$} & {$c_n n^k$, $k\in (0,1]$} \\ {regression} & {$L_n=0$} & {$L_n\asymp n e^{-\xi n}$} & {$n^{-\xi}$} & {$[\log(n)]^{-\xi}$} \\ \hline Regression\ tree& $R_n \ge r_0$ & {$ \begin{cases} \gtrsim c_n \log (n), & \xi > \log(M) \\ = c_n \log (n), & \xi = \log(M) \\ = c_n\log (n), & \xi <\log(M) \end{cases} $} & {\small$c_n\log(n)$} & {\small$c_n\log(n)$} \\ with \ predictors &{} &{} &{} & \\ $\substack{X_i's\ \text{are\ i.i.d.} \\ {\rm Uniform}\{1, \ldots, M\}}$ & $L_n\asymp (1-M^{-r_0})^n$ & $\begin{cases} \lesssim n^{-1}, & \xi > \log(M)\\ \lesssim n^{-1}\log(n), &\xi = \log(M)\\ n^{-\xi/\log(M)}, & \xi < \log(M) \end{cases} $ & $[\log(n)]^{-\xi}$& $[\log\log(n)]^{-\xi}$\\ \hline \end{tabular}% } \end{table} The case with $\sigma^2_\infty=0$, or more precisely with zero ultimate risk, however, behaves rather differently, and will be studied in Sections \ref{sec:linear_zero_tau} and \ref{sec:determin_categorial} for two popular models. We restrict ourselves to specific models because we have not been able to obtain general results parallel to those in Table~\ref{tab:optimal_rate}. But even with these specific results, we already see asymptotic behaviors, as revealed in Table~\ref{tab:optimal_rate_zero}, that are quite different from those in Table~\ref{tab:optimal_rate}. The trivial cases are for hard thresholding: for the linear model, as long as sample sizes are large enough to solve the linear system, we will have zero error; similarly, for the regression tree, the only possible error occurs when no exact match of the target case exists with respect to the $r_0$ important predictors in the training sample of size $n$. The probability of this occurring is exactly $(1-M^{-r_0})^n$ under our model assumption that all predictors $X_j$'s are independently and identically distributed as uniform on $\{1, 2, \ldots, M\}$, a mathematically convenient assumption that permits us to obtain analytical results. The more interesting cases are when $A(r)$ decays exponentially, which permits the optimal $R_n$ to be infinity; {\color{black} for example, for the regression tree model, when the resolution bias decays exponentially with $\xi>\log(M)$, choosing $R_n = \infty$ can lead to a prediction error no worse than order $n^{-1}$.} That is, we are not worried about over-fitting because the benefit from exact matching outweighs the imprecision in solving, say, the linear system. This phenomenon does not occur when we restrict ourselves to statistical models with a finite number of predictors, which would force us to adopt an error term to capture the unexplained residual variations in the outcome variable. With an infinite number of predictors, there is at least a theoretical possibility that collectively they can explain all the variations in the outcome variable. There is no free lunch, however, as this full explanatory power requires that the predictive model is specified correctly.
Nevertheless, the discovery of this phenomenon by permitting models with an infinite number of predictors should remind us of the value of exploring this line of thinking, as it might lead to alternative insights into why certain highly saturated black-box models (e.g., deep learning networks) can have a seemingly over-fitting-resistant nature. We shall explore this line of thinking in Section~\ref{sec:double_descent}, where we show how easily we can go beyond the intriguing ``double descents'' phenomenon \citep[e.g.,][]{belkin2019reconciling,hastie2019surprises,nakkiran2019deep} in the deterministic world with infinitely many predictors, without even having to actually enter the realm of over-fitting. \subsection{The impact of ordering}\label{sec:ordering} So far we have assumed that the order of the covariates is pre-determined. In reality, the investigators may have some ``low resolution'' knowledge of the importance of \textit{groups} of the covariates (e.g., age and gender are typically among the predictors to be included in predicting health outcomes). However, they often do not possess the refined knowledge to specify the exact order of the covariates in terms of their predictive power (if they did, the problem would be much easier). Mathematically, when the resolution levels change, we can change all the covariates included in the model. But to utilize our partial knowledge, however imprecise, we wish to investigate the dependence of the prediction error on the order of the covariates, and in particular the degree of mis-ordering that can fundamentally alter the prediction error rate. That is, how much misspecification of the order can we tolerate before it really matters? Assume that the family of prediction functions becomes richer as resolution increases, and is invariant to the ordering of the covariates, i.e., for any $r$ and any permutation $\pi$ of $\{0, 1, 2, \ldots, r\}$, the families of functions $\{g(\vec{\bm{x}}_r; \bm{\theta}_r)\}$ and $\{g(\vec{\bm{x}}_{\pi(r)}; \bm{\theta}_r)\}$ are the same, where $\vec{\bm{x}}_{\pi(r)} \equiv (x_{\pi(0)}, x_{\pi(1)}, \ldots, x_{\pi(r)})$. Consequently, the ultimate risk $ \tau^2=\mathbb{E}[\mathcal{L}_{\odot}(Y, g(\vec{\bm{X}}_\infty; \bm{\theta}_{\infty}^*))]$ is invariant to the ordering of covariates. This is most clearly seen under squared loss and a correctly specified conditional mean function, where $\tau^2=\mathbb{E}(\sigma_\infty^2)$, as discussed prior to arriving at \eqref{eq:quad}. Below we will focus on the resolution bias and estimation error. We begin by considering a specific ordering of the covariates, $\{X_0, X_1, X_2,\ldots\}$, identified with its resolution bias $A(\cdot)$, estimation error $\varepsilon(\cdot, n)$, and rate-optimal resolution $R_n$. Let $A'$, $\varepsilon'$ and $R_n'$ be their counterparts under a new ordering $\{X'_{0}, X'_{1}, \ldots\}$. Generally, the estimation errors $\varepsilon(r_n, n)$ and $\varepsilon'(r_n, n)$ under both orderings (i.e., $r_n = R_n$ or $R'_n$) are of the same order after some proper scaling of ``unit noise'', because they involve estimating the same number of parameters. In the following discussion, we assume $\varepsilon(r_n, n)/[A(r_n) + \tau^2] \asymp \varepsilon'(r_n, n)/[A'(r_n) + \tau^2]$, which reduces to $\varepsilon(r_n, n) \asymp \varepsilon'(r_n, n)$ when $\tau^2 > 0$. As shown later, this assumption is motivated by the linear regression and tree regression models.
Then, a sufficient condition for the new ordering to achieve the optimal rate under the original ordering is that $A'(R_n) = O(A(R_n))$. This condition should be intuitive because all it requires is that the new ordering does not delay the inclusion of covariates that are considered important by the original ordering. Suppose now that every covariate matters, in the sense that the resolution bias at any finite resolution is positive, regardless of the ordering of covariates. From Section \ref{sec:rate_optimal_resolution}, for any ordering of covariates, its optimal primary resolution must go to infinity as $n\rightarrow \infty$; that is, we exclude the hard-thresholding case (which is too ideal for the kind of individualized learning we address in this article). To measure the difference between $A(\cdot)$ and $A'(\cdot)$, we introduce $M_r(A, A')$ to denote the minimum non-negative integer such that the first $r-M_r(A, A')+1$ covariates in ordering $A(\cdot)$ are ranked among the first $r+1$ positions in ordering $A'(\cdot)$, i.e., the variables $\{X_0, \ldots, X_{r - M_r(A, A')}\}$ are included in $\{X'_0, \ldots, X'_r\}$. Note that $M_r(A, A') \le r$ because we can assume $X'_0= X_0$, since they both denote the constant term. The measure is asymmetric in $A$ and $A'$, and the farther $M_r(A, A')$ is away from zero, the more different $A$ and $A'$ will be. That is, $M_r(A, A')$ is the number of mistakes we make in choosing the first $r+1$ covariates with respect to the original ordering $A(\cdot)$. The following theorem tells us how many mistakes are acceptable, asymptotically. \begin{theorem}\label{th:ordering} Assume that (a) the family of prediction functions becomes richer as resolution increases, and is invariant to the permutation of the covariates at each resolution; (b) the estimation error rate is invariant to the ordering: $\varepsilon(r_n, n)/[A(r_n)+\tau^2] \asymp \varepsilon'(r_n, n)/[A'(r_n)+\tau^2]$. Then a sufficient condition for $A'(R_n) = O(A(R_n))$ under each decay scenario (as underlined, and where $\xi>0$) is given below. \begin{itemize} \item[(i)] \underline{Exponential Decay: $A(r) \asymp e^{-\xi r}$}:\quad $\limsup_{r \rightarrow \infty} M_r(A, A') \leq \text{Constant}$. \item[(ii)] \underline{Polynomial Decay: $A(r) \asymp r^{-\xi}$ }:\quad $\limsup_{r \rightarrow \infty} M_r(A, A')/r <1$. \item[(iii)] \underline{Logarithmic Decay: $A(r) \asymp \log^{-\xi}(r)$}: \quad $M_r(A, A') = r - r^{1/a_r}$ with $a_r = O(1)$. \end{itemize} \end{theorem} The qualitative message of Theorem~\ref{th:ordering} is rather intuitive: the fewer important predictors there are, the surer we need to be about including them in our prediction model. Although we still need to obtain the necessary conditions, the quantitative messages here can be taken as theoretical guidelines. With exponential decay, the number of forgivable mistakes is very limited, and it cannot be permitted to grow with the resolution level. Under polynomial decay, which still includes the practically learn-able case when the estimation error is also polynomial in resolution $r$, we can permit the number of mistakes to increase linearly with $r$, as long as the slope is less than one. This learn-able case is perhaps the practically most important scenario, since polynomial decay and polynomial estimation error are the kind of cases that we hope to encounter in practice.
Exponential decay is likely too much to hope for in many practical situations, and logarithmic decay is hopeless in terms of individualized learning, as seen in Table~\ref{tab:optimal_rate} and Table~\ref{tab:optimal_rate_zero}. The result in Theorem \ref{th:ordering} with logarithmic decay indicates that we can be almost entirely wrong in our ordering but still maintain the optimal rate. This seemingly too-good-to-be-true result is in fact a negative one, because it is made possible by the fact that there is really not much information in the predictors, so whatever order one uses will not improve the situation. \section{Prediction with Infinitely Many Continuous Predictors}\label{sec:theory_linear} \subsection{Normal linear models with infinitely many continuous covariates}\label{sec:linear_model} Consider the simple linear regression model with infinitely many covariates, which we assume to hold for both the target and training populations: \begin{align}\label{eq:linear_model} Y & = \bm{\beta}_{\infty}^\top \vec{\bm{X}}_{\infty} + \eta \equiv \sum_{r=0}^{\infty} \beta_r X_r + \eta, \quad \eta \sim \mathcal{N}(0, \sigma^2_\eta), \quad \eta \ind \vec{\bm{X}}_{\infty}, \nonumber \\ & {\rm where } \ X_0 = 1, \ \ \{X_1, X_2, \ldots\} \text{ are jointly normally distributed}. \end{align} Clearly, for $\mathbb{V}(Y) < \infty$, always the case in practice, there will be restrictions on the $\beta_r$'s. Here we choose the loss function to be $\mathcal{L}_{\odot}(y, \hat{y}) = \mathcal{L} (y, \hat{y}) = (y - \hat{y})^2$, and the prediction function at resolution $r$ to be linear in the first $r+1$ covariates, i.e., $g(\vec{\bm{x}}_{r}, \bm{\theta}_r)=\bm{\theta}_r^\top \vec{\bm{x}}_{r}.$ Under this setting, the optimal prediction function is $g(\vec{\bm{x}}_{r}, \bm{\theta}_r^*) = \mathbb{E}(Y \mid \vec{\bm{X}}_{r} = \vec{\bm{x}}_{r})$. The estimator $\hat{\bm{\theta}}_r$ for the true $\bm{\theta}_r^*$ using empirical risk minimization is the least-squares estimator based on the first $r+1$ covariates in the training set $\mathcal{T}_n$. Thus, our prediction for a unit with covariates $\vec{\bm{x}}_{\infty}$ using primary resolution $r$ is $g(\vec{\bm{x}}_{r}, \hat{\bm{\theta}}_r)=\hat{\bm{\theta}}_r^\top \vec{\bm{x}}_{r}$. Now we investigate the prediction error at a specific resolution $r$, and in particular its decomposition as in Section \ref{sec:decomposition}. First, because we consider squared loss and specify the prediction function correctly, the ultimate risk $\tau^2= \sigma^2_\infty\equiv \mathbb{V}(Y\mid\vec{\bm{X}}_{\infty})=\sigma^2_\eta$. Note that here, because of the additivity of the error term $\eta$ in \eqref{eq:linear_model}, $\sigma^2_\infty$ is a constant. In general, $\tau^2$ and $\sigma_\infty^2$ are different. In the following we will use $\tau^2=0$ to indicate the world without variance. Second, define $\delta_k^2 \equiv \mathbb{V}(Y\mid \vec{\bm{X}}_{k-1}) - \mathbb{V}(Y \mid \vec{\bm{X}}_{k})$ as the variance of the response explained by the $k$th covariate in excess of that explained by the previous ones. Then $A(r) = \sum_{k=r+1}^\infty \delta_k^2$.
Third, the estimation error is $ \varepsilon(r, \mathcal{T}_n) = (\hat{\bm{\theta}}_r-\bm{\theta}^*_r)^\top \mathbb{E}( \vec{\bm{X}}_{r} \vec{\bm{X}}_{r}^\top ) (\hat{\bm{\theta}}_r-\bm{\theta}^*_r), $ and its expectation over all training sets of size $n$ is (see the Appendices) \begin{align}\label{eq:epsilon_r_n_linear} \varepsilon(r,n) = \mathbb{E}_n \left[\varepsilon(r, \mathcal{T}_n)\right] = \frac{A(r)+\tau^2}{n-r-2}\left( \frac{n-2}{n}+r \right). \end{align} Consequently, the average prediction error in \eqref{eq:decomposition_average_training} at resolution $r$ is \begin{align}\label{eq:pred_loss_linear} \mathbb{E}_n\left\{\mathbb{E}[Y - g( \vec{\bm{X}}_{r}, \hat{\bm{\theta}}_r)]^2 \right\} & = \tau^2 + \sum_{k=r+1}^\infty \delta_k^2 + \mathbb{E}_n\left[ (\hat{\bm{\theta}}_r-\bm{\theta}^*_r)^\top \mathbb{E}( \vec{\bm{X}}_{r} \vec{\bm{X}}_{r}^\top ) (\hat{\bm{\theta}}_r-\bm{\theta}^*_r) \right] \nonumber \\ & = \left[ \tau^2 + A(r) \right] \cdot \frac{(n+1)(n-2)}{n(n-r-2)}. \end{align} The prediction error under linear models is also reported in \citet{hastie2019surprises}, where the authors studied ridgeless regression in the growing-$p$-\&-$n$ setting, with $p/n$ assumed to converge to a limit $\gamma$. Like most articles in the large-$p$-small-$n$ literature, they assumed that the residual variance, in our notation $A(p)+\tau^2$, is free of $p$. Under such an assumption, we see from (\ref{eq:pred_loss_linear}) (after replacing $r$ by $p$) that for any value of $\tau^2>0$, the prediction error always explodes when $\gamma = p/n$ approaches 1, yielding the turning point for the ``double descent'' phenomenon that we will discuss in Section~\ref{sec:double_descent}. However, under our MR framework, it is clear that as the number of predictors increases, the variance unexplained, that is, the residual variance, will decrease in general. Hence it makes little statistical sense to assume $A(r)$ stays constant as $r$ changes; if this were the case, what would be the point of including more predictors? By explicitly considering the behavior of the unexplained variance as the number of predictors increases, the prediction error can have very different characteristics under different scenarios. In particular, it is quite clear from \eqref{eq:pred_loss_linear} that when $\tau^2=0$, the prediction error may not explode when $r/n$ approaches one, because $A(r)$ is approaching zero as well, creating a limit of the form $0/0$, whose value will depend on the rate at which $A(r)$ approaches zero. We will investigate this issue shortly in Section \ref{sec:linear_zero_tau} for $\tau^2=0$, where we reveal that the optimal resolution $R$ can be as close to $n$ as possible, a choice traditionally considered impossible because it lies in the region of (nearly) over-fitting. \subsection{General results motivated and illustrated by linear regression}\label{sec:gen_result_linear} Under the linear model \eqref{eq:linear_model}, when the intrinsic variance is positive, i.e., $\tau^2 > 0$, we can show that for any sequence of resolution levels $\{r_n\}$, a necessary condition for $\varepsilon(r_n,n)=o(1)$ is $\lim_{n\rightarrow \infty} r_n/n=0$. Moreover, under this condition, $\varepsilon(r_n,n)\asymp r_n/n$; see the Appendices for a proof. More generally, we expect that $\varepsilon(r_n, n) \asymp \dim(\bm{\theta}_{r_n})/n$ holds for continuous predictors under regularity conditions. In general cases with continuous covariates, typically $\dim (\bm{\theta}_r)\asymp r^{\alpha}$ for some $\alpha>0$.
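As a numerical sanity check on the finite-sample formula \eqref{eq:pred_loss_linear} before turning to the general theorem, the following sketch (hypothetical Python; the exponential decay rate, the truncation of the infinite model, and all numerical values are illustrative assumptions) compares a Monte Carlo average of the out-of-sample squared error of least squares at resolution $r$ with the closed-form value $[\tau^2 + A(r)](n+1)(n-2)/[n(n-r-2)]$.
\begin{verbatim}
# A minimal Monte Carlo sketch (hypothetical) checking
# \eqref{eq:pred_loss_linear} for the normal linear model.
# beta_r = sqrt(A(r-1) - A(r)) yields resolution bias A(r) = e^{-xi r}
# (the construction discussed later); A(p_max) is negligibly small here.
import numpy as np

rng = np.random.default_rng(1)
n, r, p_max, tau2, xi = 50, 5, 100, 0.25, 0.5

A = np.exp(-xi * np.arange(p_max + 1))
beta = np.sqrt(A[:-1] - A[1:])          # beta_1, ..., beta_{p_max}

def draw(m):
    X = rng.standard_normal((m, p_max))
    return X, X @ beta + np.sqrt(tau2) * rng.standard_normal(m)

X_test, y_test = draw(20_000)
Z_test = np.column_stack([np.ones(len(y_test)), X_test[:, :r]])
mc = []
for _ in range(500):                    # average over training sets
    X, y = draw(n)
    Z = np.column_stack([np.ones(n), X[:, :r]])
    theta = np.linalg.lstsq(Z, y, rcond=None)[0]
    mc.append(np.mean((y_test - Z_test @ theta) ** 2))

theory = (tau2 + A[r]) * (n + 1) * (n - 2) / (n * (n - r - 2))
print(f"Monte Carlo: {np.mean(mc):.4f}   formula: {theory:.4f}")
\end{verbatim}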
The following theorem considers an assumption involving $\varepsilon(r, n) \asymp \dim(\bm{\theta}_r)/n \asymp r^\alpha/n$. That is, the linear model motivates us to consider this assumption of a polynomial estimation error rate in the resolution, but the result below is not restricted to the linear model. All proofs are given in the Appendices. \begin{theorem}\label{th:cont} Let $R_n$ be a rate-optimal resolution, and $L_n=A(R_n) + \varepsilon(R_n, n)$ be the corresponding minimal prediction error (after removing the ultimate risk). Then we have the following asymptotic results under each condition on the decay rate of $A(r)$ (as underlined), where all assume \textit{polynomial estimation error}, that is, $\varepsilon(r, n)\asymp r^{\alpha}/n$ with $\alpha>0$. (As in Theorem~\ref{th:ordering}, all $\xi>0$.) \begin{itemize} \item[(i)] \underline{Hard Thresholding: $A(r)=0$ for $r\geq r_0$, and $A(r)>0$ for $r< r_0$.} Then $R_n \asymp 1$ with the constraint that $\liminf_{n\rightarrow \infty} R_n\geq r_0$; and $L_n \asymp n^{-1}$. \smallskip \item[(ii)] \underline{Exponential Decay: $A(r) \asymp e^{-\xi r}$.} Then $R_n = a_n \log(n)$ with $a_n$ satisfying $a_n \asymp 1$ and $n^{1-\xi a_n}\log^{-\alpha}(n) = O(1);$ and $L_n\asymp n^{-1}\log^{\alpha}(n)$. \smallskip \item[(iii)] \underline{Polynomial Decay: $A(r) \asymp r^{-\xi}$.} Then $R_n\asymp n^{1/(\alpha+\xi)}$; and $L_n \asymp n^{-\xi/(\alpha+\xi)}$. \smallskip \item[(iv)] \underline{Logarithmic Decay: $A(r) \asymp \log^{-\xi}(r)$.} Then $R_n=a_n n^{1/\alpha}\log^{-\xi/\alpha}(n)$ with $a_n$ satisfying $a_n = O(1)$ and $\liminf_{n\rightarrow \infty} \left[\log(a_n)/\log(n)\right] >- \alpha^{-1};$ and $L_n \asymp \log^{-\xi}(n)$. \end{itemize} \end{theorem} This result provides precise descriptions of the various restrictions on the deterministic sequence $c_n$ in the first row of Table~\ref{tab:optimal_rate}, although their details are mostly secondary to the theoretical and practical insights discussed in Section~\ref{sec:rate_optimal_resolution}. Moreover, Theorem \ref{th:cont}, as well as the later theorems, relies only on the rates of $A(r)$ and $\varepsilon(r,n)$, and thus can be applied to general sieves with the same rates of $A(r)$ and $\varepsilon(r,n)$. We remark that in the derivations above we can replace the expected error $\varepsilon(r, n)$ by $\varepsilon(r, \mathcal{T}_n)$, which depends on the actual training set, as in \eqref{eq:decomposition}. That is, we can seek resolution levels $\{r_n\}$ such that $A(r_n) + \varepsilon(r_n, \mathcal{T}_n)$ converges to zero in probability in the fastest way. The results remain the same if we replace ``$\asymp$'' by ``$\overset{\mathbb{P}}{\asymp}$''. Indeed, for the linear model \eqref{eq:linear_model} with positive $\tau^2$, we show in the Appendices that (a) for any resolution sequence $\{r_n\}$, $r_n/n = o(1)$ is necessary for the actual estimation error $\varepsilon(r_n, \mathcal{T}_n)$ to be $o_{\mathbb{P}}(1)$, and (b) when $r_n/n = o(1)$, $\varepsilon(r_n, \mathcal{T}_n) \overset{\mathbb{P}}{\asymp} r_n/n$. Therefore, Theorem \ref{th:cont} applies with $\alpha = 1$ and ``$\asymp$'' replaced by ``$\overset{\mathbb{P}}{\asymp}$''. \subsection{Specific results for linear regression without variance}\label{sec:linear_zero_tau} When $\tau^2=0$, however, we are entering a rather different world. Under model \eqref{eq:linear_model} with zero $\sigma^2_\eta(=\tau^2)$, the response $Y$ is (almost surely) a deterministic function of the countably many covariates.
This is not merely a philosophical contemplation, but a mathematical reality. Indeed, any random variable can be obtained deterministically from a set of uniform variables on the unit interval, and any such uniform variable admits the binary expansion $\sum_{i=1}^\infty 2^{-i} U_i$, where $\{U_i, i\ge 1\}$ are i.i.d. Bernoulli(1/2); see \cite{doi:10.1080/01621459.2018.1537921} for an investigation of using this deterministic expansion to study statistical independence. Of course, empirically it is impossible to test whether $\tau^2=0$. Hence one would expect, or at least hope, that it is inconsequential for practical purposes whether or not we set $\tau^2=0$, as alluded to in \cite{meng2014trio}. Therefore we were surprised initially when we saw the critical dependence of our asymptotic results on whether $\tau^2=0$ or not. When $\tau^2=0$, the asymptotic error $\varepsilon(r,n)$ is no longer dominated by the usual $r/n$ order, but by $A(r)$ itself, as discussed previously. Specifically, contrasting with the case where $\tau^2>0$, $r/n = o(1)$ is no longer a necessary requirement for $\varepsilon(r,n)$ to converge to zero, because $A(r)$ can drive the error to zero even if $r/n\rightarrow 1$, as seen in \eqref{eq:pred_loss_linear}. This fact leads to different results from Theorem~\ref{th:cont}, as summarized below. We emphasize that the following theorem, although it focuses on the linear model, also holds for cases where the estimation error follows the same rate as that in \eqref{eq:pred_loss_linear}. \begin{theorem}\label{th:contzero} Under model \eqref{eq:linear_model} with $\tau^2=0$ and $L^2$ loss, the rate-optimal resolution $R_n$ and the corresponding minimal prediction error $L_n = A(R_n) + \varepsilon(R_n, n)$ have the following forms under each condition on the decay rate of $A(r)$, where all $\xi>0$. \begin{itemize} \item[(i)] \underline{Hard Thresholding: $A(r)=0$ for $r\geq r_0$, and $A(r)>0$ for $r< r_0$.} The optimal resolution is any $R_n$ such that $\liminf_{n\rightarrow \infty} R_n\geq r_0$ and $R_n \le n-3$; and $L_n = 0$ for sufficiently large $n$. \item[(ii)] \underline{Exponential Decay: $A(r) \asymp e^{-\xi r}$.} $R_n = n-O(1)$ with $R_n \le n-3;$ and $L_n\asymp n e^{-\xi n}$. \item[(iii)] \underline{Polynomial Decay: $A(r) \asymp r^{-\xi}$.} $R_n = a_n n $ with $a_n$ satisfying $a_n \asymp 1$ and $\limsup a_n < 1$; and $L_n \asymp n^{-\xi}$. \item[(iv)] \underline{Logarithmic Decay: $A(r) \asymp \log^{-\xi}(r)$.} The optimal resolution is any $R_n$ such that $\limsup R_n/n < 1$ and $\liminf \frac{\log R_n}{\log n} > 0;$ and $L_n \asymp \log^{-\xi}(n)$. \end{itemize} \end{theorem} The most unexpected finding here is that, unlike the case with $\tau^2>0$ where no optimal $R_n$ approaches over-fitting, that is, having $R_n$ close to $n$, all four cases here permit or even require $R_n$ to be of the same order as $n$. When $A(r)$ has a hard threshold or decays exponentially, we can even allow $R_n = n-3$, almost the largest resolution level at which we can fit ordinary least squares given sample size $n$ (recall we have $r+1$ unknown parameters at resolution $r$). When $A(r)$ decays polynomially or logarithmically, we can choose $R_n = cn$ for some constant $c\in (0,1)$. That is, the usual concerns with over-fitting disappear. Another unexpected finding is that the logarithmic case permits $R_n \asymp n^{k}$ for $k \in (0,1)$, which is smaller than in the polynomial case, against our intuition that slower decay should require a larger number of covariates.
However, this does not contradict Theorem~\ref{th:cont}, which applies only to cases with $\tau^2>0$. These unexpected theoretical results compel us to think harder about our intuitions built from the results in Section \ref{sec:gen_result_linear}, which are consequences of the principle of bias-variance trade-off. Does the principle fail here, as some have declared about the ``double descents'' phenomena in machine learning, which apparently can also prefer over-fitted models \citep[e.g.,][]{belkin2019reconciling,hastie2019surprises,nakkiran2019deep}? Whereas more research is needed to understand the deterministic regime as identified by Theorem~\ref{th:contzero}, our current understanding is that the bias-variance trade-off is sound and well. In a world with zero variance, the optimal trade-off should place all its bets on the bias term. In a deterministic world, the more mathematical constraints imposed for solving a set of equations, the smaller is the set of potential solutions. Without any variance, any specific individual case is a hard mathematical constraint for reconstructing the deterministic relationship between the outcome and the predictors. It is not surprising therefore---retrospectively---that the mathematics is instructing us to use as high a resolution as possible, except for saving some degrees of freedom to take care of the ``pseudo-variance'' caused by $A(r)$, when it does not decay sufficiently rapidly. Attempting to understand this preference for over-fitting in the deterministic setup, we realize that the ``double descents'' phenomenon may not be due to over-fitting as currently depicted, or at least it can also occur within the ``under-fitting'' region. In the current literature, ``double descent'' refers to the phenomenon that as $p$ increases, the prediction error or risk first decreases due to the bias reduction, and then increases due to the inflated variance. However, as $p$ exceeds the (effective) data size, the prediction error decreases again, i.e., it exhibits a double descent. Many researchers have tried to understand this phenomenon, and most of the studies attribute it to over-parameterization and to the fact that the fitted model tends to be the smoothest one interpolating all training samples; see, e.g., \citet{belkin2019reconciling,hastie2019surprises}. The section below demonstrates that double and indeed multiple descents can occur without over-parameterization. This fact suggests that the issue of ordering covariates discussed in Section \ref{sec:ordering} is an intrinsic one, and that the reasons for the double descent phenomena in machine learning might be more nuanced than over-parameterization. \subsection{No surprises: Double and multiple descent phenomenon}\label{sec:double_descent} We first consider a setting which demonstrates a double descent phenomenon within the under-fitting region. We assume that the resolution bias has the following form: \begin{align}\label{eq:approx_error_double} A(r) = \begin{cases} r^{-1}, & \text{if } r \le \underline{r}, \\ \frac{1 + \exp(\underline{r} - \overline{r})}{\underline{r}} \cdot \frac{1}{1 + \exp(r - \overline{r})}, & \text{if } r > \underline{r}, \end{cases} \end{align} where $\underline{r} \le \overline{r}$ are two positive integers, and the coefficient $\{ 1 + \exp(\underline{r} - \overline{r}) \}/\underline{r}$ for $r>\underline{r}$ is chosen such that $A(r)$ is a continuous function of $r$.
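The double descent implied by \eqref{eq:approx_error_double} can be reproduced directly from \eqref{eq:pred_loss_linear} without any simulation; the sketch below (hypothetical Python, using $\underline{r} = 30$, $\overline{r} = 60$ and $n = 100$, the same values as in the figures discussed next) simply evaluates the error curve.
\begin{verbatim}
# A minimal sketch: the prediction error \eqref{eq:pred_loss_linear}
# with tau^2 = 0, under the resolution bias \eqref{eq:approx_error_double}.
import numpy as np

r_lo, r_hi, n = 30, 60, 100   # underline{r}, overline{r}, sample size

def A(r):
    # Resolution bias of \eqref{eq:approx_error_double}, continuous at r_lo.
    if r <= r_lo:
        return 1.0 / r
    c = (1.0 + np.exp(r_lo - r_hi)) / r_lo
    return c / (1.0 + np.exp(r - r_hi))

def pred_error(r):
    # \eqref{eq:pred_loss_linear} with tau^2 = 0; requires r < n - 2.
    return A(r) * (n + 1) * (n - 2) / (n * (n - r - 2))

# First descent, a rise as barely informative covariates enter around
# r_lo, then a second descent once informative covariates reappear
# near r_hi:
for r in (5, 30, 45, 60, 80):
    print(r, pred_error(r))
\end{verbatim}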
Figure \ref{fig:double_descent}(a) plots the resolution bias against the resolution when $\underline{r} = 30$ and $\overline{r} = 60$. Figure \ref{fig:double_descent}(b) shows the average prediction loss \eqref{eq:pred_loss_linear} when $\tau^2=0$ and $n = 100$, which clearly demonstrates a ``double-descent'' phenomenon. Comparing Figures \ref{fig:double_descent}(a) and (b), we can see that the double-descent pattern of the prediction error is driven by the varying importance of the added covariates. That is, when we add covariates with little predictive power, we are essentially adding noise to our prediction and hence increasing the prediction error, until we add more powerful covariates that (again) bring the error down. \begin{figure}[ht] \centering \begin{subfigure}{.39\textwidth} \centering \includegraphics[width=1\linewidth]{plots/resolution_bias_double_0215_Oct20.pdf} \caption{\centering Resolution bias} \end{subfigure}% \begin{subfigure}{.39\textwidth} \centering \includegraphics[width=1\linewidth]{plots/prediction_loss_double_0215_Oct20.pdf} \caption{Prediction error} \end{subfigure} \caption{ Figures plotting the resolution bias in \eqref{eq:approx_error_double}, as well as the corresponding prediction error with $\tau^2=0$, against the resolution $r$. }\label{fig:double_descent} \end{figure} With this insight, it is easy to demonstrate a multiple-descent phenomenon with as many descents as we want. For example, we can take \begin{align}\label{eq:approx_error_multiple} A(r) = \begin{cases} \mathbbm{1}\{r \le \underline{r}_1\} \cdot r^{-1} + \mathbbm{1}\{r > \underline{r}_1\} \cdot \frac{1 + \exp(\underline{r}_1 - \overline{r}_1)}{\underline{r}_1} \cdot \frac{1}{1 + \exp(r - \overline{r}_1)}, & \text{if } r \le \overline{r}_1, \\ c_2 \mathbbm{1}\{r \le \underline{r}_2\} \cdot r^{-1} + c_2 \mathbbm{1}\{r > \underline{r}_2\} \cdot \frac{1 + \exp(\underline{r}_2 - \overline{r}_2)}{\underline{r}_2} \cdot \frac{1}{1 + \exp(r - \overline{r}_2)}, & \text{if } \overline{r}_1 < r \le \overline{r}_2, \\ c_3 \mathbbm{1}\{r \le \underline{r}_3\} \cdot r^{-1} + c_3 \mathbbm{1}\{r > \underline{r}_3\} \cdot \frac{1 + \exp(\underline{r}_3 - \overline{r}_3)}{\underline{r}_3} \cdot \frac{1}{1 + \exp(r - \overline{r}_3)}, & \text{if } \overline{r}_2 < r \le \overline{r}_3,\\ \ \ \ldots \end{cases} \end{align} where $\underline{r}_1 \le \overline{r}_1 \le \underline{r}_2 \le \overline{r}_2 \le \underline{r}_3 \le \overline{r}_3 \le \ldots$ and the $c_k$'s are chosen such that $A(r)$ is a continuous function of $r$. Figure \ref{fig:multiple_descent}(a) plots the resolution bias $A(r)$ against the resolution $r$ when $\overline{r}_k = \underline{r}_k + 30 = 60k$ for $k\ge 1$. From Figure \ref{fig:multiple_descent}(a), we can see that, as $r$ increases, the resolution bias keeps repeating the pattern in Figure \ref{fig:double_descent}(a), i.e., the importance of the added covariates keeps fluctuating. Figure \ref{fig:multiple_descent}(b) plots the logarithm of the average prediction error in \eqref{eq:pred_loss_linear} against the resolution when the sample size $n=300$ and the intrinsic error $\tau^2=0$. Clearly, Figure \ref{fig:multiple_descent}(b) exhibits a multiple-descent phenomenon. However, in contrast to Figure \ref{fig:double_descent}(b), the prediction error does not die down in the end.
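The construction \eqref{eq:approx_error_multiple} can be sketched in the same way; the recursive rescaling of the $c_k$'s below is one natural implementation of the continuity requirement, not necessarily the one used for Figure~\ref{fig:multiple_descent}.

\begin{verbatim}
import math

def segment(r, r_lo, r_hi):      # basic shape, eq. (eq:approx_error_double)
    if r <= r_lo:
        return 1.0 / r
    return (1 + math.exp(r_lo - r_hi)) / r_lo / (1 + math.exp(r - r_hi))

def A(r, K=6):                   # piecewise bias, eq. (eq:approx_error_multiple)
    c = 1.0
    for k in range(1, K + 1):
        r_lo, r_hi = 60 * k - 30, 60 * k
        if r <= r_hi or k == K:
            return c * segment(r, r_lo, r_hi)
        # rescale the next segment so that A is continuous at r_hi
        nxt_lo, nxt_hi = 60 * (k + 1) - 30, 60 * (k + 1)
        c *= segment(r_hi, r_lo, r_hi) / segment(r_hi, nxt_lo, nxt_hi)

n = 300   # with the same stand-in estimation error, the loss below cycles
loss = [A(r) * (1 + (r + 1) / (n - r - 2)) for r in range(1, n - 2)]
\end{verbatim}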
The reason the error does not die down is that the resolution bias in Figure \ref{fig:double_descent}(a) decays exponentially, while that in Figure \ref{fig:multiple_descent}(a) interweaves between exponential and polynomial decays, a case not covered by our theorems. \begin{figure}[ht] \centering \begin{subfigure}{.39\textwidth} \centering \includegraphics[width=1\linewidth]{plots/resolution_bias_multiple_0215_Oct20.pdf} \caption{\centering Resolution bias} \end{subfigure}% \begin{subfigure}{.39\textwidth} \centering \includegraphics[width=1\linewidth]{plots/log_prediction_loss_multiple_0215_Oct20.pdf} \caption{Prediction error} \end{subfigure} \caption{ Figures plotting the resolution bias in \eqref{eq:approx_error_multiple}, as well as the corresponding prediction error (with $\tau^2=0$), against the resolution $r$. }\label{fig:multiple_descent} \end{figure} From the above discussion, it is not difficult to see that double or multiple descent phenomena are driven by the interplay between the varying decay of the resolution bias and the inflation of the estimation error. Depending on which of these two terms is dominating, the prediction error can either decrease or increase, and can thus exhibit multiple descent patterns. {\color{black} A reviewer points out that the multiple descent phenomenon can also occur when most of the covariates are irrelevant and the relevant ones appear sporadically.} Such phenomena are also not restricted to regression settings. For example, in the midst of revising this article, we learned about \citet{liang2020multiple}, which demonstrated multiple descent phenomena in kernel machines and neural networks. We remark that, for any monotonically decreasing function $A(r)$, we can construct a linear model with $A(r)$ as its decay rate, so all the examples above are realizable. Let $X_0 = 1$, $\{X_1, X_2, X_3, \ldots\}$ be i.i.d.\ standard normal random variables, and $\eta \sim \mathcal{N}(0, \sigma^2_{\eta})$. Define $\beta_0$ to be any constant, and $\beta_r = \sqrt{A(r-1) - A(r)},$ for any $r \ge 1.$ Then the corresponding linear model \eqref{eq:linear_model} has the desired resolution bias $A(r)$. We will use this construction in the following simulation study. \begin{figure}[ht] \centering \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=1\linewidth]{plots/AvePred_tau2_0_5_exponential_n50} \caption{Exponential, $\tau^2=\frac{1}{2}$} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=1\linewidth]{plots/AvePred_tau2_0_5_polynomial_n50} \caption{Polynomial, $\tau^2=\frac{1}{2}$} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=1\linewidth]{plots/AvePred_tau2_0_5_logarithm_n50} \caption{Logarithmic, $\tau^2=\frac{1}{2}$} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=1\linewidth]{plots/no_intrinsic_AvePred_exponential_n50} \caption{\centering Exponential, $\tau^2 = 0$} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=1\linewidth]{plots/no_intrinsic_AvePred_polynomial_n50} \caption{Polynomial, $\tau^2 = 0$} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=1\linewidth]{plots/no_intrinsic_AvePred_logarithm_n50} \caption{Logarithmic, $\tau^2 = 0$} \end{subfigure} \caption{ The performance of the three strategies CV, UE and IC for estimating the prediction error when $n=50$, with $\tau^2 = 0.5$ (top row) and $\tau^2 = 0$ (bottom row), respectively.
The $x$-axis denotes the resolution level $r$, and the $y$-axis denotes the logarithm of the true and estimated average prediction error over 500 simulated training sets. The resolution biases follow the decay rates $e^{-r}$, $r^{-1}$ and $\{\log(r)\}^{-1}$, respectively, for the three scenarios in (a)--(f). } \label{fig:estimate_pred_loss} \end{figure} \subsection{Finite sample performance -- Preliminary findings}\label{sec:finite} Whereas theoretical results are extremely useful for providing deep understanding and revealing new insights, we must be mindful that they may or may not match empirical findings with finite samples. As a first step towards a comprehensive (and very challenging) study of our MR framework with finite samples, we conducted a simulation study using the normal linear model in Section~\ref{sec:linear_model}. The simplicity of this model allows us to compute the optimal resolution and minimal prediction error exactly for any given $n (\ge 3)$, which can then be used as benchmarks to investigate the performance of various estimators for the optimal resolution. However, the model is still sufficiently rich and realistic both to confirm some of the asymptotic findings, including the resistance to over-fitting in the absence of intrinsic variation, and to reveal complications with finite samples that are not captured by the asymptotic results. Due to space limitations, we report only findings on three ways of estimating prediction error curves in finite samples as functions of the resolution $r$, which can then be minimized to estimate the optimal resolution. The three methods are based on cross validation (CV), an unbiased estimator (UE), and an information criterion (IC); see Appendix~\ref{sec:practical} for details and all other findings. Figure~\ref{fig:estimate_pred_loss} plots the logarithm of the averages of the three estimators over 500 Monte Carlo replications against the resolution level $r$, under different choices of the decay rate $A(r)$ and intrinsic variance $\tau^2$, all with $n=50$. We see that UE worked well by being unbiased, CV performed well except when venturing into the over-fitting region, and IC failed badly except when $r$ is small. The only exception is when there is no bias-variance trade-off, as depicted in plot (d), where the optimal resolution reaches the sample size, in which case the gross over-fitting tendency of IC brings benefit instead of damage. All six curve shapes are consistent with the theoretical findings in Theorem~\ref{th:cont} (for $\tau^2>0$) and in Theorem~\ref{th:contzero} (for $\tau^2=0$). \section{Predictions with Infinitely Many Categorical Predictors}\label{sec:theory_tree} \subsection{Regression tree models with infinitely many categorical covariates}\label{sec:regression_tree} We now introduce regression tree models with infinitely many categorical covariates, and then use them to illustrate some general results on rate-optimal resolution and prediction. Specifically, we assume both target and training populations satisfy \begin{align}\label{eq:binary_model} X_1, X_2, \ldots \text{are i.i.d. with } \mathbb{P}(X_i = k) = M^{-1} \text{ for } k = 1,2, \ldots, M, \quad \mathbb{V}(Y) < \infty, \end{align} and the dependence of $Y$ on $\{X_1, X_2, \ldots\}$ is arbitrary, where $M\ge 2$. That is, \eqref{eq:binary_model} is a regression tree in which each covariate increases the depth of the tree by one, and hence it is a tree of (potentially) infinite depth.
The loss function is again the square loss: $\mathcal{L}_{\odot}(y, \hat{y}) = \mathcal{L} (y, \hat{y}) = (y - \hat{y})^2$, and the prediction function at resolution $r$ is fully saturated, that is, it can have different values for different covariate values up to resolution $r$, $$g(\vec{\bm{x}}_{r}, \bm{\theta}_r) = \sum_{\vec{\bm{a}}_{r}\in \{1, 2, \ldots, M\}^{r+1}} \mathbbm{1} (\vec{\bm{x}}_{r}=\vec{\bm{a}}_{r}) \bm{\theta}_{r}(\vec{\bm{a}}_{r}), $$ where the summation is essentially over $M^r$ terms because $X_0\equiv 1$, $\dim(\bm{\theta}_r) = M^r$, and $\bm{\theta}_r(\vec{\bm{a}}_{r})$ denotes the coordinate corresponding to covariate value $\vec{\bm{a}}_{r}$. Given a training set $\mathcal{T}_n$, for each resolution $r$, we use $n(\vec{\bm{x}}_{r})$ to denote the number of units with covariate value $\vec{\bm{x}}_{r}$. When $n(\vec{\bm{x}}_{r}) >0$, minimizing the empirical risk leads to taking the sample average of the outcomes of these $n(\vec{\bm{x}}_{r})$ individuals. The matter is more complicated when $n(\vec{\bm{x}}_{r}) = 0$. Here we adopt the ``highest-resolution imputation''. That is, for each individual of interest, we find training samples that have the same covariates up to a resolution that is as large as possible but is truncated at $r$, and then use their average response as a prediction for this individual. Note that this estimator is unique conditional on the given order of the predictors. Consequently, our estimator for the parameter $\bm{\theta}_{r}$ has the following form: \begin{align}\label{eq:theta_binary} \hat{\bm{\theta}}_{r}(\vec{\bm{x}}_{r}) & = \begin{cases} \frac{1}{n(\vec{\bm{x}}_{r})} \sum_{i: \vec{\bm{x}}_{ir}=\vec{\bm{x}}_{r}} Y_i, & \text{if } n(\vec{\bm{x}}_{r}) >0, \\ \frac{1}{n(\vec{\bm{x}}_{k})} \sum_{i: \vec{\bm{x}}_{ik}=\vec{\bm{x}}_{k}} Y_i, & \text{if } n(\vec{\bm{x}}_{k})>0 \text{ and } n(\vec{\bm{x}}_{k+1})=0, \text{ for } 0 \le k < r. \end{cases} \end{align} This estimator is always well-defined, because $n(\vec{\bm{x}}_{0})=n>0$.
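A minimal sketch of the estimator \eqref{eq:theta_binary} may clarify the fallback mechanism; for simplicity the constant coordinate $X_0\equiv 1$ is left implicit (everything matches at $k=0$), and the toy data are purely illustrative.

\begin{verbatim}
def predict(x, r, train):
    """Highest-resolution imputation: x is a tuple (x_1, ..., x_r);
    train is a list of (covariate tuple, y) pairs of length >= r."""
    for k in range(r, -1, -1):
        matches = [y for cov, y in train if cov[:k] == x[:k]]
        if matches:                  # deepest populated resolution k <= r
            return sum(matches) / len(matches)

# toy example with M-ary covariates
train = [((1, 2, 1), 0.3), ((1, 2, 2), 0.5), ((2, 1, 1), 0.9)]
print(predict((1, 2, 2), 3, train))  # exact match at r = 3 -> 0.5
print(predict((1, 1, 1), 3, train))  # falls back to k = 1 -> 0.4
\end{verbatim}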
Under model \eqref{eq:binary_model}, we can derive that (i) the ultimate risk is $\tau^2 =\mathbb{E}[ \mathbb{V}(Y \mid \vec{\bm{X}}_\infty)]$, (ii) the resolution bias is \begin{align*} A(r) = \sum_{k=r+1}^{\infty} \left\{ \mathbb{E}[ \mathbb{V}(Y \mid \vec{\bm{X}}_{k-1}) ] - \mathbb{E}[ \mathbb{V}(Y \mid \vec{\bm{X}}_{k}) ]\right\}, \end{align*} and (iii) the estimation error is $\varepsilon(r,\mathcal{T}_n) = \mathbb{E}[ \hat{\bm{\theta}}_{r}(\vec{\bm{X}}_{r}) - \mathbb{E}(Y\mid \vec{\bm{X}}_{r}) ]^2.$ The expectation of $\varepsilon(r,\mathcal{T}_n)$ over the training sets has three terms, as indicated and simplified below: \begin{align}\label{eq:epsilon_r_n_binary} \varepsilon(r,n) & = \left[ A(r) + \tau^2 \right] \cdot \mathbb{E}_n\left[ \frac{\mathbbm{1}( n(\vec{\bm{1}}_{r}) > 0 )}{n(\vec{\bm{1}}_{r})} \right]\hskip 1.6in ({\rm Var\ when\ } n(\vec{\bm{1}}_{r})>0) \nonumber \\ & \quad \ + \sum_{k=0}^{r-1} \left[A(k) + \tau^2 \right] \cdot \mathbb{E}_n\left[ \frac{\mathbbm{1}(n(\vec{\bm{1}}_{k}) > 0, n(\vec{\bm{1}}_{k+1}) = 0)}{n(\vec{\bm{1}}_{k})} \right] \qquad \quad ({\rm Var\ when\ } n(\vec{\bm{1}}_{r})=0) \nonumber \\ & \quad \ + \sum_{k=0}^{r-1} \left[ A(k) - A(r) \right] \cdot \mathbb{E}_n\left[ \mathbbm{1}(n(\vec{\bm{1}}_{k}) > 0, n(\vec{\bm{1}}_{k+1}) = 0) \right]\qquad \ \ \ ({\rm Bias\ when\ } n(\vec{\bm{1}}_{r})=0) \nonumber \\ & = \mathbb{E}_n \left[ \frac{A(\mathcal{K} \wedge r) + \tau^2 }{n\left( \vec{\bm{1}}_{\mathcal{K}\wedge r} \right) } \right] + \sum_{k=0}^{r-1} [A(k) - A(r)] \cdot \mathbb{E}_n\left[ \mathbbm{1}(n(\vec{\bm{1}}_{k}) > 0, n(\vec{\bm{1}}_{k+1}) = 0) \right], \end{align} where $n(\vec{\bm{1}}_{k})$ denotes the number of training samples with covariate value $\vec{\bm{x}}_{ik} = \vec{\bm{1}}_{k}$, $\mathcal{K}$ is the maximum integer $k$ such that $n(\vec{\bm{1}}_{k}) > 0$, and $\mathcal{K} \wedge r = \min\{ \mathcal{K}, r \}$. Note that here $n(\vec{\bm{1}}_{k}) \sim \text{Binomial}(n, M^{-k})$ and $n(\vec{\bm{1}}_{k+1}) \mid n(\vec{\bm{1}}_{k}) \sim \text{Binomial}(n(\vec{\bm{1}}_{k}), M^{-1})$ for any $k\ge 0$. We stress that it is the assumption that all $\vec{\bm{x}}_{k}$'s are uniformly distributed that permits us to replace $n(\vec{\bm{x}}_{k})$ by $n(\vec{\bm{1}}_{k})$, which greatly simplifies the derivation; see Appendix~\ref{app:theory_tree} for the derivation of the error decomposition under model \eqref{eq:binary_model}. \subsection{General results inspired and illustrated by regression tree}\label{sec:general_regression_tree} Under \eqref{eq:binary_model}, when $\tau^2 > 0$, we can show that for any sequence $\{r_n\}$, a necessary condition for $\varepsilon(r_n,n) = o(1)$ is that $M^{r_n}/n \rightarrow 0$ as $n\rightarrow \infty$. Moreover, under this condition, the convergence rate of $\varepsilon(r,n)$ is $M^r/n$, i.e., $\varepsilon(r,n) \asymp M^r/n \asymp \dim(\bm{\theta}_r)/n$. Again, these intuitive results require some rather technical proofs, given in the Appendices. This inspires us to consider more general cases with categorical covariates in which $\dim (\bm{\theta}_r)\asymp \alpha^r$ for some $\alpha>1$; for example, $\alpha=2$ if the covariates are all binary, and the prediction function $g(\vec{\bm{x}}_{r}, \bm{\theta}_r)$ can have different values for each of the $2^r$ possible values of $\vec{\bm{x}}_{r}$. This contrasts with the previous case featuring continuous covariates, in which the dimension of the parameters increases polynomially with the resolution.
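Because of this reduction, the final expression in \eqref{eq:epsilon_r_n_binary} can be evaluated by direct Monte Carlo on the chain $n(\vec{\bm{1}}_0)=n$, $n(\vec{\bm{1}}_{k+1}) \mid n(\vec{\bm{1}}_{k}) \sim \text{Binomial}(n(\vec{\bm{1}}_{k}), M^{-1})$; a sketch follows, where the choice of $A(\cdot)$ is illustrative only.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def eps(r, n, M=2, tau2=0.0, A=lambda k: 2.0 ** (-k), reps=20000):
    total = 0.0
    for _ in range(reps):
        counts = [n]                      # n(1_0) = n
        while counts[-1] > 0:             # binomial thinning chain
            counts.append(int(rng.binomial(counts[-1], 1.0 / M)))
        K = len(counts) - 2               # deepest k with n(1_k) > 0
        kr = min(K, r)
        total += (A(kr) + tau2) / counts[kr]
        if K < r:                         # event {n(1_K) > 0, n(1_{K+1}) = 0}
            total += A(K) - A(r)
    return total / reps

print(eps(r=5, n=50))
\end{verbatim}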
The following theorem is the counterpart of Theorem~\ref{th:cont} under the exponential estimation error. \begin{theorem}\label{th:disc} Same notation and setup as in Theorem~\ref{th:cont}, except that we now assume \textit{exponential estimation error}: $\varepsilon(r,n)\asymp {\alpha}^r/n$, for some $\alpha>1$. As in Theorem~\ref{th:cont}, all $\xi>0$. \begin{itemize} \item[(i)] \underline{Hard Thresholding: $A(r)=0$ for $r\geq r_0$, and $A(r)>0$ for $r< r_0$.} Then $R_n \asymp 1$ with the constraint that $\liminf_{n\rightarrow \infty} R_n\geq r_0$, and $L_n \asymp n^{-1}$. \smallskip \item[(ii)] \underline{Exponential Decay: $A(r) \asymp e^{-\xi r}$.} Then $R_n =[\log(n)+\log(a_n)][\log(\alpha)+\xi]^{-1}$ with $a_n\asymp 1$; and $L_n\asymp n^{-\xi/\{\log(\alpha)+\xi\}}$. \smallskip \item[(iii)] \underline{Polynomial Decay: $A(r) \asymp r^{-\xi}$.} Then $R_n = a_n\log(n)$ with $a_n$ satisfying $a_n \asymp 1$ and $n^{a_n \log(\alpha)-1}\log^\xi (n) =O(1); $ and $L_n\asymp \log^{-\xi}(n)$. \smallskip \item[(iv)] \underline{Logarithmic Decay: $A(r) \asymp \log^{-\xi}(r)$.} Then $R_n=a_n \log(n)$ with $a_n$ satisfying $$\liminf_{n \rightarrow \infty} \frac{\log(a_n)}{\log\log(n)} >-1, \quad {\rm and} \quad \frac{[\log\log(n)]^\xi}{n^{1-a_n \log(\alpha)}} = O( 1 ); $$ and $L_n\asymp [\log\log(n)]^{-\xi}$. \end{itemize} \end{theorem} \subsection{Specific results for deterministic regression tree}\label{sec:determin_categorial} Similar to Section \ref{sec:linear_zero_tau}, we consider the case in which the ultimate risk $\tau^2=0$, and we will see again below how this leads to rather different asymptotic behavior. But unlike Section \ref{sec:linear_zero_tau}, even when we restrict ourselves to the regression tree model, the exact asymptotic rate for the estimation error is still difficult to obtain except when $A(r)$ has a hard-thresholding decay. We therefore adopt a two-step strategy. We first establish an upper bound on the estimation error, yielding a corresponding upper bound on the prediction error, which can then be optimized to obtain the minimal upper-bound rate. We then prove that these minimal upper-bound rates are also the maximal lower-bound rates, except for a couple of cases where our proof fails, and hence whether the upper-bound rates are optimal or sharp remains an open problem. Specifically, as proved in the Appendices, the estimation error can be bounded by \begin{align*} \varepsilon(r,n) \le \frac{2M}{n} \sum_{k=0}^r M^k A(k) \equiv \overline{\varepsilon}(r, n). \end{align*} Furthermore, $\overline{\varepsilon}(r, n)$ under varying decay rates for $A(r)$ has the following form: \begin{align}\label{eq:upper_bound_rate} \overline{\varepsilon}(r, n) \asymp \begin{cases} n^{-1}, & \text{ if } A(r) \text{ has a hard threshold or } A(r) \asymp e^{-\xi r} \text{ with } \xi>\log(M),\\ \frac{r}{n}, & \text{ if } A(r) \asymp e^{-\xi r} \text{ with } \xi=\log(M), \\ A(r)\frac{ M^r}{n}, & \text{ if } A(r) \asymp e^{-\xi r} \text{ with } \xi<\log(M), A(r) \asymp r^{-\xi} \text{ or } A(r) \asymp \log^{-\xi} (r). \end{cases} \end{align} From \eqref{eq:upper_bound_rate}, compared to $\varepsilon(r, n) \asymp M^r /n$ when $\tau^2 > 0$, we can see that the rate of the estimation error now depends also on the resolution bias and converges to zero more quickly; this is similar to the discussion in Section \ref{sec:linear_zero_tau} under the linear model. Moreover, $M^r /n = o(1)$ is no longer necessary for $\varepsilon(r, n) = o(1)$.
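A quick numerical check of \eqref{eq:upper_bound_rate} (with illustrative decay functions) confirms the three growth patterns of $n\,\overline{\varepsilon}(r,n)$ as the resolution increases.

\begin{verbatim}
import math

def eps_bar(A, r, n, M=2):           # the upper bound defined above
    return (2.0 * M / n) * sum(M ** k * A(k) for k in range(r + 1))

n = 10 ** 6
cases = [("xi > log M", lambda k: math.exp(-1.0 * k)),   # -> constant
         ("xi = log M", lambda k: 2.0 ** (-k)),          # -> linear in r
         ("polynomial", lambda k: 1.0 / (k + 1.0))]      # -> ~ 2^r / r
for name, A in cases:
    print(name, [round(n * eps_bar(A, r, n), 2) for r in (5, 10, 15)])
\end{verbatim}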
In particular, and somewhat surprisingly, when the resolution bias decays exponentially at a rate faster than or equal to $M^{-r}$, the estimation error behaves as in the usual parametric setting of Theorem \ref{th:cont}, with a fixed number of (or $r$) unknown parameters at resolution $r$, even though the model at each resolution $r$ allows potentially $M^r$ unknown parameters. The following theorem summarizes sufficient conditions for the prediction error to achieve certain (upper-bound) rates under varying decay rates of the resolution bias. \begin{theorem}\label{thm:binary_zero_tau} Under the model \eqref{eq:binary_model} with $\tau^2=0$ and $L^2$ loss, let $L_n = A(R_n) + \varepsilon(R_n, n) \le A(R_n) + \overline{\varepsilon}(R_n, n)\equiv \overline{L}_n$. The rate-optimal resolution $R_n$ or $\overline{R}_n$ and the corresponding optimal $L_n$ or $\overline{L}_n$ (respectively) have the following forms under each $A(r)$, where $\xi>0$. \begin{itemize} \item[(i)] \underline{Hard Thresholding: $A(r)=0$ for $r\geq r_0$, and $A(r)>0$ for $r< r_0$.} Then $R_n$ satisfies that $\liminf_{n\rightarrow \infty}R_n \ge r_0$; and $L_n \asymp (1-M^{-r_0})^n$. \smallskip \item[(ii)] \underline{Exponential Decay: $A(r) \asymp e^{-\xi r}$.} \begin{itemize} \item[(a)] If $\xi> \log(M)$, then $\overline{R}_n$ satisfies $n e^{-\xi \overline{R}_n} = O(1)$; and $\overline{L}_n \asymp n^{-1}$. \item[(b)] If $\xi=\log(M)$, then $\overline{R}_n = a_n \log(n)$ with $a_n$ satisfying $a_n \asymp 1$ and \\ $n^{1-a_n \log(M)}/\log(n) = O(1);$ and $\overline{L}_n\asymp n^{-1}\log(n)$. \item[(c)] If $\xi<\log(M)$, then $\overline{R}_n = a_n \log(n)$ with $a_n$ satisfying $n^{a_n\log(M) - 1 } \asymp 1$; and $\overline{L}_n \asymp n^{-\xi / \log(M)}$. \end{itemize} \smallskip \item[(iii)] \underline{Polynomial Decay: $A(r) \asymp r^{-\xi}$.} Then $\overline{R}_n = a_n \log(n) $ with $a_n$ satisfying $ a_n \asymp 1 $ and $ n^{a_n \log(M) - 1 } = O(1); $ and $\overline{L}_n \asymp \log^{-\xi}(n)$. \smallskip \item[(iv)] \underline{Logarithmic Decay: $A(r) \asymp \log^{-\xi}(r)$.} Then $\overline{R}_n = a_n \log(n) $ with $a_n$ satisfying $$\liminf_{n \rightarrow \infty} \frac{\log(a_n)}{\log\log(n)} >-1, \quad {\rm and} \quad n^{a_n \log(M) - 1}= O( 1 ); $$ and $\overline{L}_n \asymp [\log\log(n)]^{-\xi}$. \end{itemize} \end{theorem} Next we prove that the optimal rates for the upper bounds of the prediction errors are also precisely the optimal rates for the true prediction errors, except for the exponential decay case with $\xi\ge \log(M)$, where we can only conjecture but not prove that the results also hold. The following theorem summarizes our results, where for completeness we include the hard thresholding case, even though Theorem \ref{thm:binary_zero_tau} is exact in that case. Specifically, we say $l_n$ is an asymptotic lower bound for the prediction error $A(r)+\varepsilon(r,n)$, and denote it as $A(r)+\varepsilon(r, n) \gtrsim l_n$, if $l_n = O(A(r_n) + \varepsilon(r_n, n))$ for any sequence $\{r_n\}$. \begin{theorem}\label{thm:lower_bound_binary_tau2_zero} Under model \eqref{eq:binary_model} with $\tau^2=0$ and $L^2$ loss, an asymptotic lower bound for $\varepsilon(r,n)+A(r)$ has the following form under each condition on $A(r)$, where $\xi>0$.
\begin{itemize} \item[(i)] \underline{Hard Thresholding: $A(r)=0$ for $r\geq r_0$, and $A(r)>0$ for $r< r_0$.} Then $A(r) + \varepsilon(r, n) \gtrsim (1-M^{-r_0})^n$. \smallskip \item[(ii)] \underline{Exponential Decay: $A(r) \asymp e^{-\xi r}$.} Then $A(r) + \varepsilon(r, n) \gtrsim n^{-\xi / \log(M)}$. \smallskip \item[(iii)] \underline{Polynomial Decay: $A(r) \asymp r^{-\xi}$.} Then $A(r) + \varepsilon(r, n) \gtrsim \log^{-\xi}(n)$. \smallskip \item[(iv)] \underline{Logarithmic Decay: $A(r) \asymp \log^{-\xi}(r)$.} Then $A(r) + \varepsilon(r, n) \gtrsim [\log\log(n)]^{-\xi}$. \end{itemize} \end{theorem} Comparing Theorems \ref{thm:binary_zero_tau} and \ref{thm:lower_bound_binary_tau2_zero}, we see that the upper and lower bounds on $L_n$ match except when $A(r)\asymp e^{-\xi r}$ and $\xi\ge \log(M)$. Also, comparing both theorems to Theorem \ref{th:disc} with $\tau^2>0$, it is not surprising that the prediction error can achieve the same rate as that in Theorem \ref{th:disc} under polynomial or logarithmic rates. This is because the estimation error when $\tau^2=0$ converges to zero more quickly than when $\tau^2 > 0$, as shown in \eqref{eq:upper_bound_rate}. However, in Theorem \ref{thm:binary_zero_tau} with polynomial or logarithmic rates, we allow $R_n = \log(n)/\log(M)$, and thus the number of unknown parameters $M^{R_n}$ can be of the same order as the sample size $n$. When the resolution bias $A(r)$ decays exponentially, the prediction error is able to achieve a faster rate than that in Theorem \ref{th:disc}. More importantly, when the resolution bias $A(r)$ has a hard threshold or decays exponentially more quickly than $M^{-r}$, the prediction error can achieve the usual rate $n^{-1}$, and the resolution $R_n$ is even allowed to be infinite. In particular, with infinite resolution, for each individual of interest we are essentially trying to find the training samples that are closest to this individual (in terms of having exactly the same covariates up to a certain resolution), and use the average response from these training samples as our prediction. This is similar to the discussion in Section \ref{sec:linear_zero_tau}, where the usual bias-variance trade-off puts all its considerations on the bias term in the deterministic world. Finally, we remark on the construction of model \eqref{eq:binary_model} with specific resolution bias $A(r)$ and ultimate risk $\tau^2$. Let $\beta_0$ be any constant, and $\beta_k = M/\sqrt{M-1}\cdot\sqrt{A(k-1) - A(k)}$ for $k\ge 1$. Define $ Y = \sum_{k=0}^\infty \beta_k [\mathbbm{1}(X_k=1) - M^{-1}] + \eta, $ where $X_1, X_2, \ldots$ are i.i.d.\ uniform on $\{1, 2, \ldots, M\}$, $\eta$ has mean zero and variance $\tau^2$, and $\vec{\bm{X}}_{\infty}$ and $\eta$ are independent. Then the corresponding model \eqref{eq:binary_model} has the desired resolution bias and ultimate risk. \section{From the Past to Future}\label{sec:main_practical} \subsection{A logical consequence of the large-$p$-small-$n$ framework}\label{sec:unintend} We appreciate the value of permitting $p$ to vary with $n$ as a \textit{mathematical strategy} for approximations, because it can capture the magnitude of $p$ in relation to $n$ in determining which approximation terms can or cannot be ignored. But the same cannot be said about the \textit{statistical understanding} of the behavior of the resulting model in real applications.
As discussed in Section \ref{sec:theory_linear} and further argued below, this is not merely a logical or philosophical issue, but an issue of revealing correctly the actual behavior of our prediction models in practice. Specifically, for most practical problems, the underlying generative models, in whatever way nature adopts them or we conceptualize them, precede our data collection effort. We therefore can permit our data collection process to be influenced by the generative model, but not vice versa. Nature does not alter its behavior in anticipation of the sample size we may choose. Consequently, when we assume $p> n$ and permit $n\rightarrow \infty$, it forces the logical conclusion that $p=\infty$, if $p$ indexes a feature of nature's generative model. One may argue that $p$ in the large-$p$-small-$n$ asymptotics should not be conceptualized as an index of nature's behavior, but only as a human's approximation, like our primary resolution $R_n$. However, in the large-$p$-small-$n$ framework, it is often assumed that the amount of total variation in the outcome that can be explained by the $p$ predictors is a constant as we increase $n$, and hence $p$, because $p$ grows with $n$ \citep[e.g.,][]{belkin2019reconciling,hastie2019surprises}. But if $p$ is meant to represent the number of predictors we humans use for predicting an outcome, then this assumption of fixed explainability defeats the purpose of using more predictors to improve the explanatory power of the predictors. When our mathematical formulation prohibits improvements, the resulting theoretical results may mislead us when they are used for building our intuitions, even though they may provide useful mathematical approximations for computational purposes. As an illustration, let $\delta_i^2=\mathbb{E}[(\mu_{i}-\mu_{i-1})^2]$, which measures the incremental contribution of the information in ${\cal F}_{i}$ in excess of that in ${\cal F}_{i-1}$ for explaining the variability in $Y$ (over the population as defined by ${\cal F}_0$). Taking $r=0$ in (\ref{eq:keyi}), we have \begin{equation}\label{eq:keyii} \mathbb{V}(Y|{\cal F}_0)\equiv \sigma^2_0=\mathbb{E}[\sigma^2_{\infty}]+\sum_{i=1}^{\infty}\delta^2_i. \end{equation} This implies that, as $i$ increases, $\delta_i$ must be vanishingly small when $\mathbb{V}(Y|{\cal F}_0)< \infty$, a trivial condition for virtually all real-life problems. It follows that the value of $p$ in the current large-$p$-small-$n$ regime cannot possibly be a sensible index of model complexity to be used in a linear fashion, because increasing, say, from $p=2$ to $p=4$ could be far more consequential than moving from $p=22$ to $24$. Yet it has been a common practice in the current literature of machine learning and statistics to plot prediction errors against $p$. It is therefore refreshing to see some recent work studying and plotting the error against more meaningful indexes, such as a spectral decay in \cite{liang2019just}. More broadly, the predictability of any set of covariates depends on at least (I) how each of them influences the outcome in the absence of other predictors and (II) how they are related to each other. Neither of the two can be adequately captured in general by merely their size. In this article we therefore adopt the direct measure of the decay rate in prediction error as we increase the resolution level (e.g., employing more predictors).
As demonstrated in Theorems \ref{th:cont}--\ref{thm:lower_bound_binary_tau2_zero}, this resolution decay rate plays a critical role in determining the optimal resolution, as well as in further revealing some problematic aspects of the current large-$p$-small-$n$ framework. \subsection{Applications to personalized treatment} This work was initiated by the need for establishing a statistically principled and scientifically sound theory of personalized treatments \citep{meng2014trio}. Therefore, we provide a very brief review of two types of methods in the literature. The first type focuses on modeling the potential outcome of each patient given his or her covariates under each treatment arm, and it uses the resulting predictions to identify optimal treatment regimes; see \citet{murphy2003optimal, robins2004optimal, zhao2009reinforcement} and \citet{Metalearners2019}. The second type focuses on a posited class of treatment regimes and tries to find the one that maximizes the overall outcome for all units; see \citet{zhao2012estimating, laber2015tree} and \citet{kosorok2019precision}. Our results provide useful theoretical guidance and insight for both types of applications, because they are applicable to different populations of interest or target individuals, as captured by ${\cal F}_0$ and ${\cal F}_\infty$ respectively. For either approach, the key feature of our framework is the complete avoidance of imposing a relationship between $p$ and $n$, and hence it is suitable for investigating an arbitrarily large number of covariates. Indeed, as we have seen in Sections~\ref{sec:theory_linear}-\ref{sec:theory_tree}, the MR framework can handle predictions with potentially infinitely many covariates. \subsection{The method of sieves for infinite-dimensional estimation}\label{sec:sieve} The method of sieves \citep{grenander1981} deals with infinite-dimensional estimation problems by restricting the parameter estimation to a subset of the parameter space whose dimension grows with the sample size at some judiciously chosen rates \citep[e.g.,][]{Geman1982, shen1994,shen1997,johnstone2011gaussian}. The sequence of the subsets is then called a sieve, which can be viewed as a counterpart to MR's information filtration indexed by the resolution level $r$. Whereas wavelets and sieve methods share similar mathematical constructs, our focus differs from the classical literature on sieves in several ways. First, we focus on prediction instead of parameter estimation. Second, for non-/semi-parametric estimation, the sieves for certain functional classes are well-understood. Under the MR framework, the resolution bias due to a sieve is generally more complicated, and the order of the covariates, or equivalently the choice of sieve, plays an important role in the prediction error, as shown in Theorem \ref{th:ordering}. Third, we try to understand both sufficient and necessary conditions for asymptotically optimal prediction (as in Theorems \ref{th:cont}--\ref{thm:lower_bound_binary_tau2_zero}), whereas the literature on sieves typically focuses on upper bounds for the estimation convergence rate. \subsection{Much more work is needed} A much-needed theoretical insight concerns how to decide on a reasonable ordering in practice, going beyond the results in Section~\ref{sec:ordering}. We do not expect any kind of ``automated choice'' results, in theory or in practice, because of the no-free-lunch principle. Since it is impossible to have a direct learning population, judgements and assumptions are inevitable.
However, it is possible to obtain relatively general results for some specified (and practically meaningful) problems. {\color{black} Moreover, one may borrow ideas from regularization methods in the large-$p$-small-$n$ framework, which can explore all possible choices of the subsets of the covariates (e.g., $2^p$ in Lasso) without any pre-ordering. How to do so effectively within the MR framework is a challenging problem given that $p$ is potentially $\infty$, although the observed number of covariates is always finite in practice.} As mentioned earlier, we were intrigued by the world without variance. We wonder, without ever being able to determine which world we are in, how we could be allowed to see its consequences. The answer seems to lie in the fact that $\sigma^2_\infty=0$ is a necessary but not sufficient condition for the no bias-variance trade-off phenomenon. As seen in the bottom row of Figure~\ref{fig:estimate_pred_loss}, this phenomenon did not occur when $A(r)$ decayed too slowly, e.g., polynomially or logarithmically. {\color{black} Note that we can always artificially create infinitely many covariates by certain series expansions of the basic covariates. The observation in the world without variance should motivate us to investigate the performance of non-parametric sieve regression when the response is indeed a deterministic function of the covariates.} This observation also suggests the possibility of a black-box procedure resisting (empirically verifiable) over-fitting, when the number of patterns detectable with sufficient frequencies is far fewer than theoretically possible. In such cases, \textit{exhaustive learning} is practically possible with sufficiently large training samples, hence there is no need for ``intrinsic variance'' to capture model imperfection, avoiding the creation of a petri dish for over-fitting. This possibility suggests a systematic investigation of the deterministic MR framework for complex machine learning models to see if it indeed provides an alternative explanation of the over-fitting-resistant nature of these models. \section*{Acknowledgments} The authors thank colleagues, especially James Bailie, Robin Gong, Tengyuan Liang, and Kai Zhang, as well as several meticulous reviewers for encouragements and comments, which have greatly improved both the content and presentation. They also acknowledge partial financial support from NSF grants. \bibliographystyle{plainnat}
\section{Introduction} \label{intro} Extension of the north-east corner of the nuclear chart is one of the prime focuses of nuclear scientists working at accelerator and detection laboratories. The knowledge obtained so far in this realm for superheavy nuclei (SHN) is a consequence of the coherent progress of experimental and theoretical approaches over the last 40 years, which is chronicled and collected in Refs~\cite{hofmann2000,hamilton2013,ogan2015,heenen2015,oganrpp2015,oganpt2015,hofmann2016,dull2018,nazar2018,giuliani2019}. The transfermium elements (Z$>$100) up to Z$=$106 were created by irradiation of reactor-bred actinide targets with light projectiles, typically O or N \cite{seaborg1985}. Elements up to Z$=$113 were synthesized by cold fusion with Pb or Bi targets and appropriate beams such as Ni and Zn \cite{hofmann2000,morita2004,martens2019}, whereas the elements beyond were produced by hot fusion with $^{48}$Ca beams together with suitable actinide targets \cite{ogan2015,heenen2015,oganrpp2015,oganpt2015,giuliani2019,ogan2011,rudolph2013,utyonkov2018,yu2018}.\par To plan and execute the above-mentioned experiments, a deep knowledge of the decay modes and half-lives of nuclei over a very wide range of the nuclear chart is necessary. $\alpha$-decay and spontaneous fission (SF) play a crucial role in the detection of these nuclei in the laboratories, as they compete with each other \cite{hofmann2000,hamilton2013,ogan2015,heenen2015,giuliani2019,rudolph2013,utyonkov2018,Oganessian2009}. Another decay mode, speculated to provide a reach to nuclei which are not in the original $\alpha$-decay chains, is weak decay ($\beta$-decay) \cite{hofmann2016,karpov2012,zagrebaev2012,ogan2011}. Some theoretical predictions of $\beta$-decay in superheavy nuclei have already been made in Refs.~\cite{heenen2015,karpov2012,hirsch1993,moller2019,sarriguren2019}. Yet, there is a need for a systematic investigation that puts weak decay on the same footing as $\alpha$-decay and SF, so that the prospects of weak decay in the superheavy region can be explored. With this objective, we employ relativistic mean-field theory (RMF) along with empirical formulas for $\alpha$-decay, $\beta$-decay and SF to calculate the probable decay modes and half-lives of the nuclei in the range 101$\leq$Z$\leq$109.\par \section{Formalism and Calculations} \subsection{Relativistic Mean-Field Theory} RMF calculations have been carried out using the model Lagrangian density with nonlinear terms for both the ${\sigma}$ and ${\omega}$ mesons, as described in detail in Refs.$~$\cite{Singh2013,Yadav2004}.
\begin{small} \begin{eqnarray} {\cal L}& = &{\bar\psi} [\imath \gamma^{\mu}\partial_{\mu} - M]\psi\nonumber\\ &&+ \frac{1}{2}\, \partial_{\mu}\sigma\partial^{\mu}\sigma - \frac{1}{2}m_{\sigma}^{2}\sigma^2- \frac{1}{3}g_{2}\sigma ^{3} - \frac{1}{4}g_{3}\sigma^{4} -g_{\sigma} {\bar\psi} \sigma \psi\nonumber\\ &&-\frac{1}{4}H_{\mu \nu}H^{\mu \nu} + \frac{1}{2}m_{\omega} ^{2}\omega_{\mu}\omega^{\mu} + \frac{1}{4} c_{3} (\omega_{\mu} \omega^{\mu})^{2} - g_{\omega}{\bar\psi} \gamma^{\mu}\psi \omega_{\mu}\nonumber\\ &&-\frac{1}{4}G_{\mu \nu}^{a}G^{a\mu \nu} + \frac{1}{2}m_{\rho} ^{2}\rho_{\mu}^{a}\rho^{a\mu} - g_{\rho}{\bar\psi} \gamma_{\mu}\tau^{a}\psi \rho^{\mu a}\nonumber\nonumber\\ &&-\frac{1}{4}F_{\mu \nu}F^{\mu \nu} - e{\bar\psi} \gamma_{\mu} \frac{(1-\tau_{3})} {2} A^{\mu} \psi\,\, \end{eqnarray} \end{small} where the field tensors $H$, $G$ and $F$ for the vector fields are defined by \begin{small} \begin{eqnarray} H_{\mu \nu} &=& \partial_{\mu} \omega_{\nu} - \partial_{\nu} \omega_{\mu}\nonumber\\ G_{\mu \nu}^{a} &=& \partial_{\mu} \rho_{\nu}^{a} - \partial_{\nu} \rho_{\mu}^{a} -2 g_{\rho}\,\epsilon^{abc} \rho_{\mu}^{b} \rho_{\nu}^{c} \nonumber\\ F_{\mu \nu} &=& \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu}\,\,\nonumber\ \end{eqnarray} \end{small} and other symbols have their usual meaning. The corresponding Dirac equations for nucleons and Klein-Gordon equations for mesons obtained with the mean-field approximation are solved by the expansion method on the widely used axially deformed Harmonic-Oscillator basis \cite{Geng2003,Gambhir1989}. The quadrupole-constrained calculations have been performed for all the nuclei considered here in order to obtain their potential energy surfaces (PESs) and determine the corresponding ground-state deformations \cite{Geng2003,Flocard1973}. For nuclei with an odd number of nucleons, a simple blocking method without breaking the time-reversal symmetry is adopted \cite{Geng2003wt,Ring1996}. In the calculations we use for the pairing interaction a delta force, i.e., V = -V$_0 \delta(r)$ with the strength V$_0$ = 350 MeV-fm$^3$, which has been used in Refs.$~$\cite{Yadav2004,Saxena2017} for the successful description of bubble nuclei \cite{saxena,saxena1,saxenajpg} and also of superheavy nuclei \cite{saxenaijmpe2018,saxenaijmpe2019}. Apart from its simplicity, the applicability and justification of using such a $\delta$-function form of interaction has been discussed in Ref.$~$\cite{Dobaczewski1983}, whereby it has been shown in the context of HFB calculations that the use of a delta force in a finite space simulates the effect of a finite-range interaction in a phenomenological manner (see also \cite{Bertsch1991} for more details). Whenever the zero-range $\delta$ force is used, either in the BCS or the Bogoliubov framework, a cutoff procedure must be applied, i.e. the space of the single-particle states where the pairing interaction is active must be truncated. This is not only to simplify the numerical calculation but also to simulate the finite-range (more precisely, long-range) nature of the pairing interaction in a phenomenological way \cite{Dobaczewski1995,Goriely2002}. In the present work, the single-particle states subject to the pairing interaction are confined to the region satisfying \begin{small} \begin{equation} \epsilon_i-\lambda\le E_\mathrm{cut}, \end{equation} \end{small} where $\epsilon_i$ is the single-particle energy, $\lambda$ the Fermi energy, and $E_\mathrm{cut} = 8.0$ MeV.
The center-of-mass correction is approximated by \begin{small} \begin{equation} E_{\textrm{cm}} = -\frac{3}{4}41A^{-1/3}, \end{equation} \end{small} which is often used in the relativistic mean-field theory among the many recipes for the center-of-mass correction \cite{Bender1999}. For further details of these formulations, we refer the reader to Refs.$~$\cite{Gambhir1989,Singh2013,Geng2003}. \subsection{$\alpha$-Decay} The energy release $Q_\alpha$ in ground-state to ground-state decay is obtained from mass excesses or total binding energies through \begin{small} \begin{eqnarray} Q_\alpha(Z, N) & = & M(Z, N) - M(Z-2, N-2) - M(2, 2) \nonumber\\ & =& B.E.(Z-2, N-2) + B.E.(2, 2) - B.E.(Z, N) \label{qalpha} \end{eqnarray} \end{small} where the $^{4}He$ mass excess M(2,2) is 2.42 MeV and the binding energy B.E.(2,2) is 28.30 MeV. To calculate log$_{10}T_{\alpha}$, we use the recently reported modified Royer formula by Akrawy \textit{et al.} \cite{Akrawy2017}: \begin{small} \begin{equation} log_{10}T_{\alpha}(sec) = a + bA^{1/6}\sqrt{Z} + \frac{cZ}{\sqrt{Q_{\alpha}}}+ dI + eI^{2} \label{alpha} \end{equation}\end{small} where I $=$ $\frac{N-Z}{A}$ and the constants a, b, c, d, and e, which depend on the parities of Z and N, are listed below.\\ \begin{table}[!htbp] \centering \resizebox{0.6\textwidth}{!}{% {\begin{tabular}{cccccc} \hline \multicolumn{1}{c}{Parity (Z$-$N)}& \multicolumn{1}{c}{a}& \multicolumn{1}{c}{b}& \multicolumn{1}{c}{c}& \multicolumn{1}{c}{d}& \multicolumn{1}{c}{e}\\ \hline $e-e$&-27.837&-0.9420&1.5343&-5.7004&8.785\\ $o-e$&-26.801&-1.1078&1.5585&14.8525&-30.523\\ $e-o$&-28.225&-0.8629&1.5377&-21.145&53.890\\ $o-o$&-23.635&-0.891&1.404&-12.4255&36.9005\\ \hline \end{tabular}}} \end{table} \subsection{Spontaneous Fission} A decay mode as important as $\alpha$-decay in the superheavy region is spontaneous fission (SF), for which a half-life formula was first proposed by Swiatecki \cite{wjswiatecki1955}, based on fission barrier heights and the values of the fissility parameter $Z^2/A$; subsequently, many other attempts \cite{dwdorn1961,cxu2005} have been made to improve the formula. For our investigation, we use the formula given by Karpov \textit{et al.} \cite{karpov2012} \begin{small} \begin{eqnarray} log_{10}T_{SF}(sec) & = & 1146.44 - 75.3153Z^2/A + 1.63792(Z^2/A)^2 - 0.0119827 (Z^2/A)^3\nonumber\\ & &+B_f (7.23613 - 0.0947022Z^2/A)\nonumber\\ & &+\begin{cases} \mbox{0, Z and N are even}\\ \mbox{1.53897, A is odd}\\ \mbox{0.80822, Z and N are odd}. \end{cases} \label{TSF} \end{eqnarray}\end{small} Here $B_f$ is the fission barrier, which is calculated as the sum of the liquid-drop barrier $B_f(LDM)$ and the ground-state shell correction $\delta U(g.s.)$, i.e. $B_f$ = $B_f (LDM)$ + $\delta U(g.s.)$ \cite{karpov2012}. For our calculations, we take the fission barrier $B_f$ directly from Ref.$~$\cite{moller2009}.
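For concreteness, Eqs.~\eqref{qalpha}--\eqref{TSF} are straightforward to implement; the following minimal Python sketch is illustrative only, where the binding-energy lookup \texttt{BE} is an assumed interface to the RMF output (not part of the published formulas) and the parity keys follow the (Z$-$N) ordering of the coefficient table above.

\begin{verbatim}
import math

BE_ALPHA = 28.30                         # B.E.(2,2) in MeV

def q_alpha(BE, Z, N):                   # BE: lookup (Z, N) -> B.E. in MeV
    return BE(Z - 2, N - 2) + BE_ALPHA - BE(Z, N)

# coefficients of the modified Royer formula, keyed by (Z, N) parity
ROYER = {("e", "e"): (-27.837, -0.9420, 1.5343, -5.7004, 8.785),
         ("o", "e"): (-26.801, -1.1078, 1.5585, 14.8525, -30.523),
         ("e", "o"): (-28.225, -0.8629, 1.5377, -21.145, 53.890),
         ("o", "o"): (-23.635, -0.891, 1.404, -12.4255, 36.9005)}

def log_t_alpha(Z, A, Q):                # Q_alpha in MeV, T_alpha in s
    N = A - Z
    a, b, c, d, e = ROYER[("eo"[Z % 2], "eo"[N % 2])]
    I = (N - Z) / A
    return (a + b * A ** (1.0 / 6.0) * math.sqrt(Z)
            + c * Z / math.sqrt(Q) + d * I + e * I ** 2)

def log_t_sf(Z, A, Bf):                  # Karpov et al.; B_f in MeV
    x = Z ** 2 / A
    if Z % 2 == 0 and (A - Z) % 2 == 0:
        parity = 0.0                     # Z and N even
    elif A % 2 == 1:
        parity = 1.53897                 # A odd
    else:
        parity = 0.80822                 # Z and N both odd
    return (1146.44 - 75.3153 * x + 1.63792 * x ** 2
            - 0.0119827 * x ** 3 + Bf * (7.23613 - 0.0947022 * x) + parity)
\end{verbatim}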
\subsection{$\beta$-Decay \& Electron Capture (Weak-decay)} The energy released in ground-state to ground-state electron decay ($\beta$-decay) is given in terms of the atomic mass excess $M(Z,N)$ or the total binding energy $B.E.(Z,N)$ by \begin{small} \begin{equation} \begin{aligned}[b] Q_{\beta^-} & = M(Z, N) - M(Z+1, N-1) \\ & = B.E.(Z+1, N-1) - B.E.(Z, N) + M_n -M _H \end{aligned}\label{qbetaminus} \end{equation} \end{small} whereas in positron decay ($\beta^+$-decay) it is \begin{small} \begin{eqnarray} Q_{\beta^+} & = & M(Z, N) - M(Z-1, N+1) - 2m_0c^2 \nonumber \\ & = &B.E.(Z-1, N+1) - B.E.(Z, N) + M _H -M_n -2m_0c^2 \label{qbetaplus} \end{eqnarray} \end{small} For calculating half-lives of $\beta^+$-decay, electron capture must also be considered, because in some cases $\beta^+$ decay is energetically forbidden while electron capture (EC) remains possible. The energy released in ground-state to ground-state electron capture (EC) is \begin{small} \begin{eqnarray} Q_{EC} & = & M(Z, N) - M(Z-1, N+1) - \mbox{B.E(electron)} \nonumber \\ & = &B.E.(Z-1, N+1) - B.E.(Z, N) + M_H -M_n - \mbox{B.E(electron)} \label{qec} \end{eqnarray} \end{small} so that \begin{small} \begin{eqnarray} Q_{EC} & = Q_{\beta^+} + 2m_0c^2 - \mbox{B.E(electron)} \end{eqnarray} \end{small} To look into the possibility of $\beta$-decay, which is found to be very important for transfermium isotopes ~\cite{heenen2015,karpov2012,hirsch1993,moller2019,sarriguren2019}, we adopt the empirical formula of Fiset and Nix \cite{Fiset1972} for estimating the $\beta$-decay half-lives. It is worth noting that this formula for $\beta$-decay has recently been used in one of our works \cite{saxenaijmpe2019} and in the work by Ikram \textit{et al.} \cite{Ikram2017}. However, it should be noted that the nuclear structure that generates the energy distribution of the Gamow-Teller (GT) strength plays a very important role in $\beta$-decay, and hence for an accurate study of $\beta$-decay one has to consider the structure of the parent and daughter nuclei ~\cite{heenen2015,sarriguren2019}. Consequently, the calculation of weak-decay rates requires a knowledge of the final states and of the nuclear matrix elements connecting them to the parent ground states. In practice, even after some approximations are made, considerable work is still involved and the results are bound to show some model dependence \cite{heenen2015}. Therefore, to assess the probability of $\beta$-decay in a more general manner for transfermium isotopes, we follow Fiset and Nix \cite{Fiset1972}: if $\beta^{\pm}$-decay or electron capture to the ground state of the daughter nucleus occurs, the inverse half-lives can be written as \begin{small} \begin{eqnarray} \frac{1}{T_{\beta}}&=&\frac{1}{f_t}{f(Z_d,W_{\beta})}=\frac{1}{f_t}{C(Z_d,W_{\beta})}{f(0, W_{\beta})}\nonumber\\ &\approx&\frac{1}{f_t}\;\frac{1}{30}{C(Z_d,W_{\beta})}(W_{\beta}/m_e)^5, \label{eqbeta} \end{eqnarray} \noindent \begin{eqnarray} \frac{1}{T_{EC}}&\approx&\frac{1}{f_t} 2\pi(\alpha Z_K)^{2s+1} \left(\frac{2R_0}{\hbar c/ m_e}\right)^{2s-2}\nonumber\\ &&\times\frac{1+s}{\Gamma(2s+1)}\left[\frac{Q_{EC}}{m_e}-(1-s)\right]^2, \label{eqEC} \end{eqnarray} \end{small} where the last form of Equation \ref{eqbeta} is valid for $W_{\beta}\gg m_e$.
Here in Equations \ref{eqbeta},\ref{eqEC}, $Z_d$ is the proton number of the daughter nucleus, and $Z_K$ is the effective charge of the parent nucleus for an electron in the K-shell; it is given approximately by $Z_K= Z_P - 0.35$, where $Z_P$ is the proton number of the parent nucleus. The energy $W_{\beta}$ is the sum of the energy of the emitted $\beta$-particle and its rest mass $m_e$, i.e., $W_{\beta} = Q_{\beta}+m_{e}$. Also, the quantity $s$ is given by $s = [ 1 -(\alpha Z_K)^2]^{\frac{1}{2}}$ and represents the rest mass of an electron minus its binding energy in the K-shell, in units of $m_e$. The quantity $\alpha$ is the fine-structure constant, and $R_0$ is the nuclear radius, which is taken to be $R_0 = 1.2249 A^{\frac{1}{3}}$ fm.\par The function $C(Z_d, W_{\beta}) = f(Z_d, W_{\beta})/f(0, W_{\beta})$ accounts for the increase in the $\beta$-decay rate arising from the nuclear Coulomb field. The Fermi integral $f(Z_d, W_{\beta})$ arises from integration over the density of states available to the emitted $\beta$-particle and neutrino. This function is absent in electron capture, where the electron is initially in a definite atomic state and consequently the phase-space volume is determined entirely by the energy of the emitted neutrino. This is responsible for the difference in the energy dependences of the half-lives in Eqs. \ref{eqbeta},\ref{eqEC}. $\beta$-decay and electron capture occur not only to the ground states of the daughter nuclei but also to excited states. Following Seeger \textit{et al.} \cite{Seeger1965}, Eqs. \ref{eqbeta},\ref{eqEC} can be integrated over the excitation energy $E$ in the daughter nucleus, under the assumption that $1/3$ of the states in the daughter nucleus are available for such transitions. The energy dependence of the function $C(Z_d, W_{\beta})$ is neglected. This leads to \begin{small} \begin{eqnarray} \frac{1}{T_{\beta}}& \approx & \frac{1}{30}\frac{1}{f_t}{C(Z_d)}\frac{1}{m_e^5}\int_{0}^{Q_{\beta}}(W_{\beta}-E)^5\frac{1}{3}\rho dE \nonumber\\ &\approx&\frac{1}{540}\frac{1}{f_t}{C(Z_d)}\frac{\rho}{m_e^5}(W_\beta^6-m_e^6) \end{eqnarray} \begin{eqnarray*} \frac{1}{T_{EC}}&\approx&\frac{1}{f_t} 2\pi(\alpha Z_K)^{2s+1} \left(\frac{2R_0}{\hbar c/ m_e}\right)^{2s-2}\frac{1+s}{\Gamma(2s+1)}\frac{1}{m_e^2}\\ &&\times\int_{0}^{Q_{EC}-(1-s)m_e}\left[Q_{EC}-(1-s)m_e-E\right]^2\frac{1}{3}\rho dE\\ &\approx&\frac{1}{9} \frac{1}{f_t} 2\pi(\alpha Z_K)^{2s+1}\left(\frac{2R_0}{\hbar c/ m_e}\right)^{2s-2}\frac{1+s}{\Gamma(2s+1)}\frac{\rho}{m_e^2}\\ &&\times\left[Q_{EC}-(1-s)m_e\right]^3. \end{eqnarray*} \end{small} For the average density of states $\rho$ in the daughter nucleus, we use the empirical results given by Seeger \textit{et al.} \cite{Seeger1965}, which are listed in Table 1. \begin{table}[!htbp] \centering \caption{Density of nuclear states ($e^{-A/290}\times$ number of states within 1 MeV of the ground state).
} \resizebox{0.5\textwidth}{!}{% \begin{tabular}{c c c c c} \hline Nuclear & \multicolumn{3}{c}{Spherical} & Deformed \\ \cline{2-4} species & doubly & singly & neither & \\ & magic & magic & magic & \\ \hline even & 0.22 & 0.97 & 1.36 & 2.73 \\ odd-mass & 0.60 & 1.67 & 5.0 & 8.6 \\ odd & 7.5 & 8.6 & 15.0 & 15.0 \\ \hline \end{tabular}} \end{table} Upon inserting the values $f_t = 10^{6.5}$ s and $C(Z_d) = 10^{1.5}$, we are finally led to \begin{small} \begin{equation} \begin{aligned}[b] T_{\beta}&= \frac{540 m_e^5}{\rho(W_\beta^6-m_e^6)}\times 10^{5.0}s \end{aligned} \label{tbeta} \end{equation} \begin{eqnarray} T_{EC} &= &\frac{9 m_e^2}{2\pi(\alpha Z_K)^{2s+1}\rho\left[Q_{EC}-(1-s)m_e\right]^3} \left(\frac{2R_0}{\hbar c/ m_e}\right)^{2-2s} \nonumber\\ &&\times\frac{\Gamma(2s+1)}{1+s}\times10^{6.5}s. \label{tecfinal} \end{eqnarray} \end{small} We have concentrated here on electron capture from the K-shell because it is usually the predominant capture process; however, electron capture from the $L_1$ shell also occurs for superheavy nuclei, with a relative probability of about 20\%. In this paper, we will follow Eqn. \ref{tbeta} to calculate half-lives for $\beta^-$-decay and $\beta^+$-decay, whereas Eqn. \ref{tecfinal} will be used to calculate the half-life for electron capture. The half-life with respect to $\beta^+$/EC-decay is given by \begin{small} \begin{eqnarray} \frac{1}{T_{\beta^+/EC}} = \frac{1}{T_{\beta^+}} + \frac{1}{T_{EC}} \label{tbetaplusec} \end{eqnarray} \end{small} \section{Results and discussions} The term ``transfermium'' describes the elements with Z$>$100 (fermium). Therefore, in this paper we have considered the isotopes of Md, No, Lr, Rf, Db, Sg, Bh, Hs, and Mt (101$\leq$Z$\leq$109) that can be produced at the frontier of cold- and hot-fusion reactions and have recently attracted great interest in the superheavy community \cite{yu2018,martens2019,hong2016,hong2017}. Though the superheavy landscape has reached Z$=$118 \cite{ogan2006}, there are still many nuclei yet to be explored through their decay modes, lifetimes, and other properties. Therefore, this article aims to probe the undiscovered territory of the elements with 101$\leq$Z$\leq$109, including odd and even nuclei in the ranges $^{235-268}$Md, $^{238-268}$No, $^{241-270}$Lr, $^{243-272}$Rf, $^{245-272}$Db, $^{248-276}$Sg, $^{250-278}$Bh, $^{253-282}$Hs, and $^{255-284}$Mt. For all these nuclei, the calculations are done using two parameter sets, i.e., NL3* \cite{nl3star} and TMA \cite{sugaTMA}, of the relativistic mean-field theory (RMF) \cite{Yadav2004,Saxena2017,saxenaijmpe2018,saxenaijmpe2019} as explained above. Where possible, we will compare our results with available experimental data \cite{nndc}.
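Before presenting the systematic results, we note that the weak-decay formulas above are equally simple to implement. The sketch below is illustrative rather than our production code: \texttt{BE} is again an assumed binding-energy lookup, $\rho$ is read from Table 1, and the final helper simply selects the channel with the smallest half-life, which is how the dominant modes in the tables below are assigned.

\begin{verbatim}
import math

ME, MN_MINUS_MH = 0.511, 0.782           # m_e and M_n - M_H, in MeV

def q_beta_minus(BE, Z, N):              # Eq. (qbetaminus)
    return BE(Z + 1, N - 1) - BE(Z, N) + MN_MINUS_MH

def q_beta_plus(BE, Z, N):               # Eq. (qbetaplus)
    return BE(Z - 1, N + 1) - BE(Z, N) - MN_MINUS_MH - 2.0 * ME

def t_beta(q, rho):                      # Eq. (tbeta); q in MeV, T in s
    W = q + ME
    return 540.0 * ME ** 5 / (rho * (W ** 6 - ME ** 6)) * 10 ** 5.0

def t_ec(q_ec, Zp, A, rho):              # Eq. (tecfinal)
    alpha, hbarc = 1.0 / 137.036, 197.327       # hbar*c in MeV fm
    Zk = Zp - 0.35                              # effective K-shell charge
    s = math.sqrt(1.0 - (alpha * Zk) ** 2)
    R0 = 1.2249 * A ** (1.0 / 3.0)              # nuclear radius in fm
    return (9.0 * ME ** 2 * (2.0 * R0 * ME / hbarc) ** (2.0 - 2.0 * s)
            * math.gamma(2.0 * s + 1.0) / (1.0 + s)
            / (2.0 * math.pi * (alpha * Zk) ** (2.0 * s + 1.0)
               * rho * (q_ec - (1.0 - s) * ME) ** 3) * 10 ** 6.5)

def t_beta_plus_ec(t_bp, t_cap):         # Eq. (tbetaplusec)
    return 1.0 / (1.0 / t_bp + 1.0 / t_cap)

def dominant(half_lives):                # e.g. {"alpha": T, "SF": T, ...}
    return min(half_lives, key=half_lives.get)
\end{verbatim}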
In addition, a few of our results are also compared with the Hartree-Fock-Bogoliubov (HFB) mass model with the HFB-24 functional \cite{hfbxu}, the relativistic continuum Hartree-Bogoliubov (RCHB) theory with the relativistic density functional PC-PK1 \cite{rchb2018}, the nuclear mass table with the global mass formula WS4 \cite{ws42014}, and the recently reported Finite Range Droplet Model (FRDM) calculations \cite{moller2019}.\par \begin{figure}[ht] \centering \includegraphics[width=0.6\textwidth]{Fig.1.eps} \caption{(Colour online) Difference of calculated and experimental \cite{nndc} Q-values for $\alpha$, $\beta^-$ and $\beta^+$/EC-decays.} \label{fig1} \end{figure} \begin{table*}[!htbp] \caption{Root mean square error (RMSE) of Q-values for $\alpha$, $\beta^-$ and $\beta^+$/EC-decays for each isotopic chain.} \centering \renewcommand{\arraystretch}{1.0} \resizebox{1.0\textwidth}{!}{% {\begin{tabular}{cccccccccccccccccc} \hline \multicolumn{1}{c}{Nucleus}& \multicolumn{5}{c}{Q$_{\alpha}$}& \multicolumn{1}{c}{}& \multicolumn{5}{c}{Q$_{\beta^{-}}$}& \multicolumn{1}{c}{}& \multicolumn{5}{c}{Q$_{\beta^{+}/EC}$}\\ \cline{2-6} \cline{8-12} \cline{14-18} &TMA&NL3*&RCHB&WS4&FRDM&&TMA&NL3*&HFB&RCHB&WS4&&TMA&NL3*&RCHB&WS4&FRDM\\ \cline{1-6} \cline{8-12} \cline{14-18} Md & 0.29 & 0.57& 1.24 & 0.24 & 0.28 && 0.25 & 0.36 & 0.24& 0.83&0.19 && 0.39 & 0.50 & 1.10&0.40 & 0.13 \\ No & 0.35 & 0.49& 0.86 & 0.25 & 0.32 && 0.57 & 0.53 & 0.26& 1.04&0.34 && 0.25 & 0.29 & 0.84&0.18 & 0.18 \\ Lr & 0.27 & 0.41& 0.77 & 0.27 & 0.48 && 0.26 & 0.46 & 0.26& 0.74&0.24 && 0.57 & 0.63 & 1.04&0.34 & 0.11 \\ Rf & 0.36 & 0.43& 0.86 & 0.22 & 0.46 && 0.55 & 0.56 & 0.36& 1.06&0.28 && 0.26 & 0.36 & 0.74&0.24 & 0.26 \\ Db & 0.49 & 0.55& 1.05 & 0.25 & 0.50 && 0.21 & 0.81 & 0.54 & 0.59&0.14 && 0.56 & 0.46 & 1.06&0.28 & 0.13 \\ Sg & 0.54 & 0.88& 1.33 & 0.28 & 0.47 && 0.67 & 0.97 & 0.38& 1.12&0.46 && 0.21 & 0.73 &0.59&0.14 & 0.23\\ Bh & 0.72 & 0.53& 1.69 & 0.20 & 0.44 && 0.41 & 0.79 & 0.50& 0.56&0.21 && 0.67 & 0.97 &1.12&0.46 & 0.28 \\ Hs & 0.58 & 0.54& 1.90 & 0.16 & 0.39 && 0.41 & 0.54 & 0.44& 0.65&0.36 && 0.55 & 0.79 &0.56&0.20 & 0.32 \\ Mt & 0.45 & 0.80& 2.49 & 0.32 & 0.60 && 0.36 & 0.98 & 0.43 & 0.58&0.23 && 0.55 & 0.63 &0.65&0.36 & 0.23 \\ \hline \end{tabular}} } \end{table*} First, to validate our RMF results, we compute the difference between our calculated Q-values (obtained with the TMA parameter set) and the available experimental data \cite{nndc}. The Q-values are calculated for $\alpha$, $\beta^-$, $\beta^+$, and EC-decays using Eqns. \ref{qalpha},\ref{qbetaminus},\ref{qbetaplus},\ref{qec}, and the differences are plotted in Fig. \ref{fig1}. In this figure, out of $\beta^+$ and EC-decay, we plot Q-values for EC-decay only (labelled $\beta^+$/EC decay). From Fig. \ref{fig1} it is gratifying to note that our Q-values are in excellent agreement with experiment: for most of the nuclei the difference Q$_{Expt.}$$-$Q$_{Calc.}$ is within 1.0 MeV, which supports our predictions of decay modes. At this point, it is appropriate to compare our calculations with other parameter sets and theories. Therefore, in Table 2, we show the root mean square error (RMSE) of the Q-values (Q$_{Expt.}$$-$Q$_{RMF}$) for $\alpha$, $\beta^-$ and $\beta^+$/EC-decays for each isotopic chain. The comparison is made with another RMF parameter set, NL3*, as well as with other theories, viz. HFB \cite{hfbxu}, RCHB(PC-PK1) \cite{rchb2018}, WS4 \cite{ws42014} and FRDM \cite{moller2019}.
As per the table, it is gratifying to note that the RMSE from both RMF parameter sets is reasonably low, comparable to the other theories. However, it is also important to point out that the TMA parameter set, which is mass-dependent, is found to be more appropriate in this region than the non-linear variant NL3*. Therefore, in the following, the TMA parameter set will be used for the more accurate prediction of decay modes.\par As already mentioned in the introduction, the competition between the different decay modes is important for determining the stability of a particular nucleus and, consequently, for assessing its reachability from a given hot-fusion reaction \cite{hamilton2013,ogan2015,heenen2015,giuliani2019}. Therefore, we employ the calculated Q-values from our theory and the formulas mentioned in Section II to calculate the half-lives of $\alpha$, $\beta^\pm$, and EC-decays using Eqns. \ref{alpha}, \ref{tbeta} and \ref{tecfinal}, respectively. Similarly, the half-life for spontaneous fission is calculated using Eqn. \ref{TSF}. All these half-lives are tabulated together to demonstrate their competition and to identify the most probable decay mode for each nucleus in the considered isotopic chains. In order to validate our predicted decay modes and the half-lives of the probable decay modes, we first compare our theoretical predictions with the available experimental data \cite{nndc} in Table 3. For this comparison, only those nuclei are taken into consideration from the experimental database \cite{nndc} which have more than 50$\%$ probability for a particular decay mode or for which the decay modes are comparable.\par \begin{center} \small \setlength{\tabcolsep}{3pt} \begin{longtable}{|c|c|c|c|c|c|c|c|c|c|} \caption{Comparison of decay modes and half-lives with our theoretical predictions for transfermium isotopes.
Experimental decay modes and half-lives are taken from \cite{nndc}.}\\ \hline \multicolumn{1}{|c}{Nucleus}& \multicolumn{5}{|c}{Log T$_{1/2}$}& \multicolumn{1}{|c}{Predicted}& \multicolumn{1}{|c}{Expt.}& \multicolumn{1}{|c}{Predicted}& \multicolumn{1}{|c|}{Expt.}\\ \cline{2-6} \multicolumn{1}{|c}{}& \multicolumn{1}{|c}{Log T$_\alpha$}& \multicolumn{1}{|c}{Log T$_{\beta^-}$}& \multicolumn{1}{|c}{Log T$_{\beta^+}$}& \multicolumn{1}{|c}{Log T$_{EC}$}& \multicolumn{1}{|c}{Log T$_{SF}$}& \multicolumn{1}{|c}{Decay-Mode}& \multicolumn{1}{|c}{Decay-Mode}& \multicolumn{1}{|c}{T$_{1/2}$}& \multicolumn{1}{|c|}{T$_{1/2}$}\\ \hline \endfirsthead \multicolumn{10}{c}% {\tablename\ \thetable\ -- \textit{Continued from previous page}} \\ \hline \multicolumn{1}{|c}{Nucleus}& \multicolumn{5}{|c}{Log T$_{1/2}$}& \multicolumn{1}{|c}{Predicted}& \multicolumn{1}{|c}{Expt.}& \multicolumn{1}{|c}{Predicted}& \multicolumn{1}{|c|}{Expt.}\\ \cline{2-6} \multicolumn{1}{|c}{}& \multicolumn{1}{|c}{Log T$_\alpha$}& \multicolumn{1}{|c}{Log T$_{\beta^-}$}& \multicolumn{1}{|c}{Log T$_{\beta^+}$}& \multicolumn{1}{|c}{Log T$_{EC}$}& \multicolumn{1}{|c}{Log T$_{SF}$}& \multicolumn{1}{|c}{Decay-Mode}& \multicolumn{1}{|c}{Decay-Mode}& \multicolumn{1}{|c}{T$_{1/2}$}& \multicolumn{1}{|c|}{T$_{1/2}$}\\ \hline \endhead \hline \multicolumn{10}{r}{\textit{Continued on next page}} \\ \endfoot \hline \endlastfoot $^{245}$Md & 0.48& -& 1.71& 2.49 & 3.51 & $\alpha$ & SF/$\alpha$ & 3.02s & (0.90$\pm$0.25)ms\\ $^{246}$Md & 0.73& -& 0.68& 1.88 & 4.27 & $\beta^{+}/EC$/$\alpha$ & $\alpha$ & 4.73s & (0.9$\pm$0.2)s \\ $^{247}$Md & 1.03& -& 1.92& 2.58 & 6.21 & $\alpha$ & $\alpha$ & 10.80s & (1.2$\pm$0.1)s \\ $^{248}$Md & 1.65& -& 0.99& 2.03 & 6.76 & $\beta^{+}/EC$/$\alpha$ & $\alpha$/$\beta^{+}/EC$ & 9.75s & $(13^{+15}_{-4})$s \\ $^{249}$Md & 0.84& -& 2.47& 2.83 & 8.53 & $\alpha$ & $\alpha$/$\beta^{+}/EC$ & 6.89s & (21.7$\pm$2.0)s \\ $^{250}$Md & 1.51 & - & 1.54 & 2.28 & 9.68 & $\alpha$/$\beta^{+}/EC$ &$\beta^{+}/EC$ & 32.28s &$(25^{+10}_{-5})$s \\ $^{251}$Md & 1.90 & - & 3.45 & 3.27 & 11.71 & $\alpha$ &$\beta^{+}/EC$ & 79.89s & (4.27$\pm$0.26)m \\ $^{252}$Md & 2.57 & - & 2.24 & 2.60 & 12.20 & $\beta^{+}/EC$/$\alpha$ &$\beta^{+}/EC$ & 2.89m & (2.3$\pm$0.8)m \\ $^{253}$Md & 3.20 & - & 4.65 & 3.76 & 12.55 & $\alpha$/$\beta^{+}/EC$ &$\beta^{+}/EC$ & 26.40m & $(6^{+12}_{-3})$m \\ $^{254}$Md & 4.60 & - & 3.49 & 3.15 & 10.82 & $\beta^{+}/EC$/$\alpha$ &$\beta^{+}/EC$ & 23.54m & (28$\pm$8)m \\ $^{255}$Md & 4.61 & - & - & 6.83 & 10.27 & $\alpha$ &$\beta^{+}/EC$ & 11.30h & (27$\pm$2)m \\ $^{256}$Md & 4.53 & - & 4.60 & 3.60 & 8.40 & $\beta^{+}/EC$ &$\beta^{+}/EC$ & 65.70m & (77$\pm$2)m \\ $^{257}$Md & 5.48 & - & - & - & 7.42 & $\alpha$ &$\beta^{+}/EC$ & 84.79h & (5.52$\pm$0.05)h \\ $^{258}$Md & 6.74& 4.92& 6.79& 4.27 & 6.19 & $\beta^{+}/EC$/$\beta^{-}$ &$\beta^{+}/EC$ & 5.17m & (57.5$\pm$0.9)m \\ $^{259}$Md & 6.57& -& -& - & 6.70 & $\alpha$/SF & SF & 2.50h & (1.6$\pm$0.6)h \\ $^{260}$Md & 8.33 &3.78 & -& -& 7.24 & $\beta^{-}$ & SF/$\alpha$/$\beta$ &0.07d &(31.8$\pm$0.5)d\\ \hline $^{250}$No & 0.13& -& 0.18 & 0.41 & 4.38 & $\alpha$/$\beta^{+}/EC$ & SF & 1.36s & $(4.2^{+1.2}_{-0.9})$$\mu$s \\ $^{251}$No & 0.32& -& 0.43 &0.57 & 7.61 & $\alpha$/$\beta^{+}/EC$ & $\alpha$ & 2.09s & (0.80$\pm$0.01)s \\ $^{252}$No & 0.59& -& -0.04& 0.28& 7.22 & $\beta^{+}/EC$/$\alpha$ & $\alpha$ & 0.90s & (2.44$\pm$0.04)s \\ $^{253}$No & 1.37& -& 0.29 &0.47 & 10.52 & $\beta^{+}/EC$ & $\alpha$/$\beta^{+}/EC$ & 1.95s & (1.62$\pm$0.15)m\\ $^{254}$No & 1.68& -& -0.46& 0.14& 8.75 & 
$\beta^{+}/EC$ & $\alpha$/$\beta^{+}/EC$ & 0.35s & (51$\pm$10)s\\ $^{255}$No & 3.20 & - & 0.01 & 0.31 & 9.34 & $\beta^{+}/EC$ & $\beta^{+}/EC$/$\alpha$ & 1.03s & (3.52$\pm$0.21)m \\ $^{256}$No & 3.11& -& - &-0.64& 6.65 & $\beta^{+}/EC$ & $\alpha$ & 0.23s & (2.91$\pm$0.05)s\\ $^{257}$No & 3.07& -& -0.51& 0.12& 7.10 & $\beta^{+}/EC$/$\alpha$ & $\beta^{+}/EC$/$\alpha$ & 0.31s & (24.5$\pm$0.5)s\\ $^{258}$No & 3.54& -& - & - & 4.06 & $\alpha$ & SF & 0.96h & (1.2$\pm$0.2)ms \\ $^{259}$No & 5.55& -& - &- & 6.01 & $\alpha$ & $\alpha$ & 97.60h & (58$\pm$5)m\\ $^{260}$No & 4.81& -& - & - & 4.57 & SF & SF & 10.34h & (106$\pm$8)ms\\ $^{262}$No & 6.78& -& - & - & 5.91& SF & SF & 9.32d & 5ms\\ \hline $^{252}$Lr & 1.73 & - &1.11 & 2.01 & 4.81 & $\beta^{+}/EC$/$\alpha$ & $\alpha$ & 12.87s &$(0.36^{+0.11}_{-0.07})$s\\ $^{253}$Lr & 0.80 & - &2.45 & 2.76 & 6.80 & $\alpha$ & $\alpha$ & 6.24s &$(0.57^{+0.07}_{-0.06})$s\\ $^{254}$Lr & 1.48 & - &1.40 & 2.15 & 7.77 & $\beta^{+}/EC$/$\alpha$ & $\alpha$/$\beta^{+}/EC$ & 25.04s &(18.4$\pm$1.8)s\\ $^{255}$Lr & 1.24 & - &2.96 & 2.98 & 8.48 & $\alpha$ & $\alpha$ & 17.45s &(31.1$\pm$1.3)s\\ $^{256}$Lr & 2.75 & - &2.07 & 2.45 & 6.97 & $\beta^{+}/EC$/$\alpha$ & $\alpha$/$\beta^{+}/EC$ & 116.26s &(27$\pm$3)s\\ $^{257}$Lr & 2.32 & - &4.52 & 3.64 & 6.77 & $\alpha$ & $\alpha$ & 208.02s &4s\\ $^{258}$Lr & 1.99 & - &2.76 & 2.76 & 5.08 & $\alpha$ & $\alpha$ & 97.42s &(4.1$\pm$0.3)s\\ $^{259}$Lr & 2.41 & - &8.02 & 4.54 & 5.08 & $\alpha$ & $\alpha$ & 256.07s &(6.2$\pm$0.3)s\\ $^{260}$Lr & 3.46 & - &3.77 & 3.19 & 5.27 & $\alpha$ & $\alpha$ & 47.60m &(180$\pm$30)s\\ $^{261}$Lr & 3.65 & - & - & 6.20 & 6.25 & $\alpha$ & SF & 4.63m &(39$\pm$12)m\\ $^{262}$Lr & 5.02 & - & 4.97 & 3.67 & 6.35 & $\beta^{+}/EC$ & $\alpha$/$\beta^{+}/EC$ & 1.30h&(4.0h) \\ $^{266}$Lr & 7.58 & 7.00 & - & 5.13 & 9.49 & $\beta^{+}/EC$ & SF & 37.86h &$(11^{+21}_{-5})$h\\ \hline $^{253}$Rf & 0.23 & - & 1.55 & 2.32 & 3.09 & $\alpha$ & SF/$\alpha$& 1.71s & $(48^{+17}_{-10})$$\mu$s\\ $^{254}$Rf & -0.17 & - & 3.46 & 3.45 & 2.89 & $\alpha$ & SF & 0.68s & (23$\pm$3)$\mu$s\\ $^{255}$Rf & 0.78 & - & 2.01 & 2.53 & 5.89 & $\alpha$ & SF/$\alpha$& 6.01s & (1.68$\pm$0.09)s\\ $^{256}$Rf & 0.81 & - & 4.07 & 3.72 & 4.69 & $\alpha$ & SF & 6.45s & (6.67$\pm$0.10)ms\\ $^{257}$Rf & 2.04 & - & 2.69 & 2.84 & 5.72 & $\alpha$ & $\alpha$ & 108.83s & $(4.4^{+0.6}_{-0.5})$s\\ $^{258}$Rf & 1.31 & - & 5.88 & 4.45 & 3.26 & $\alpha$ & SF/$\alpha$& 20.29s & $(14.7^{+1.2}_{-0.1})$ms\\ $^{259}$Rf & 1.16 & - & 3.34 & 3.13 & 4.55 & $\alpha$ & $\alpha$ & 14.50s & (2.4$\pm$0.4)s\\ $^{260}$Rf & 1.30 & - & - & 5.35 & 2.86 & $\alpha$ & SF & 19.79s & (21$\pm$1)ms\\ $^{261}$Rf & 2.35 & - & 4.45 & 3.59 & 5.35 & $\alpha$ & $\alpha$ & 224.81s & (68$\pm$3)s\\ $^{262}$Rf & 2.63 & - & - & - & 4.20 & $\alpha$ & SF & 428.21s & (2.3$\pm$0.4)s\\ $^{263}$Rf & 4.45 & - & 7.00 & 4.43 & 6.10 & $\beta^{+}/EC$/$\alpha$ & SF & 7.39h & (10$\pm$2)m\\ $^{265}$Rf & 5.45 & - & - & 5.79 & 8.94 & $\alpha$ & SF & 3.29d & $(1^{+12}_{-3})$m\\ \hline $^{255}$Db & -1.27 & - & 1.58 & 2.29 & 2.83 & $\alpha$ & $\alpha$ & 0.05s & $(1.6^{+0.6}_{-0.4})$s\\ $^{256}$Db & -0.01 & - & 0.74 & 1.77 & 3.54 & $\alpha$ & $\alpha$ & 0.99s & $(1.6^{+0.5}_{-0.4})$s\\ $^{257}$Db & 0.27 & - & 2.00 & 2.48 & 5.06 & $\alpha$ & $\alpha$ & 1.87s & (2.3$\pm$0.2)s\\ $^{258}$Db & 1.72 & - & 1.27 & 2.01 & 3.92 & $\beta^{+}/EC$/$\alpha$ & $\alpha$ & 18.45s & $(4.2^{+0.4}_{-0.3})$s\\ $^{259}$Db & 1.00 & - & 2.90 & 2.89 & 4.10 & $\alpha$ & $\alpha$ & 10.05s & (0.51$\pm$0.16)s\\ $^{260}$Db & 1.07 & - & 1.72 & 2.22 & 
3.79 & $\alpha$ & $\alpha$ & 11.64s & (1.52$\pm$0.13)s\\ $^{261}$Db & 1.01 & - & 3.80 & 3.28 & 4.62 & $\alpha$ & $\alpha$ & 10.22s & (1.8$\pm$0.4)s\\ $^{262}$Db & 2.45 & - & 2.39 & 2.53 & 4.74 & $\beta^{+}/EC$/$\alpha$ & $\alpha$ & 243.38s & 35s\\ $^{263}$Db & 2.14 & - & 4.83 & 3.70 & 5.62 & $\alpha$ & SF/$\alpha$& 139.57s & $(27^{+10}_{-7})$s\\ $^{267}$Db & 4.28 & - & - & 5.67 & 10.38 & $\alpha$ & SF & 315.43m & $(73^{+350}_{-33})$m\\ $^{268}$Db & 5.46 & - & 5.09 & 3.65 & 9.99 & $\beta^{+}/EC$/$\alpha$ & SF & 1.24h & $(32^{+11}_{-7})$h\\ $^{270}$Db & 6.43 & 4.88& - & 4.35 & 7.70 & $\beta^{+}/EC$ & $\alpha$ & 6.28h & $(1^{+19}_{-4})$h\\ \hline $^{258}$Sg & -0.36 & - &2.87 &3.11 & 1.66 & $\alpha$ & SF & 0.43s &$(2.9^{+1.3}_{-0.7})$ms\\ $^{259}$Sg & 0.91 & - &1.79 &2.35 & 3.09 & $\alpha$ & $\alpha$ & 8.08s &(0.29$\pm$0.55)s\\ $^{260}$Sg & 0.30 & - &3.88 &3.56 & 1.85 & $\alpha$ & $\alpha$/SF& 2.02s &(4.95$\pm$0.33)ms\\ $^{261}$Sg & 0.47 & - &2.29 &2.58 & 3.76 & $\alpha$ & $\alpha$ & 2.94s &(178$\pm$14)ms\\ $^{262}$Sg & 0.45 & - &4.66 &3.89 & 2.56 & $\alpha$ & SF & 2.81s &$(6.9^{+3.8}_{-1.8})$ms\\ $^{263}$Sg & 1.38 & - &2.80 &2.81 & 5.17 & $\alpha$ & $\alpha$ & 23.85s &(1.0$\pm$0.2)s\\ $^{264}$Sg & 1.12 &- &5.45 &4.21 & 3.28 & $\alpha$ & SF/$\alpha$ & 13.16s & $(37^{+27}_{-11})$ms\\ $^{265}$Sg & 2.27 & - &3.59 &3.16 & 6.01 & $\alpha$ & $\alpha$ & 186.07s &$(14.4^{+3.7}_{-2.5})$s\\ $^{266}$Sg & 2.39 & - & - &5.81 & 6.09 & $\beta^{+}/EC$/$\alpha$ & SF & 245.26s &$(21^{+20}_{-12})$s\\ $^{269}$Sg & 4.46 & - & 7.80 &4.44 & 10.47 & $\alpha$ & $\alpha$ & 7.93h &$(3.1^{+3.7}_{-1.1})$m\\ $^{271}$Sg & 5.16 & - & - &6.12 & 8.35 & $\alpha$ & $\alpha$/SF& 39.93h &(2.4$\pm$4.3)m\\ \hline $^{260}$Bh & 0.65 & - & 0.61 &1.63 & 1.62 & $\beta^{+}/EC$/$\alpha$ & $\alpha$ & 4.07s & $(35^{+19}_{-9})$ms\\ $^{261}$Bh & -0.16 & - & 1.86 &2.35 & 3.01 & $\alpha$ & $\alpha$ & 0.68s & $(11.8^{+3.9}_{-2.4})$ms\\ $^{262}$Bh & 0.65 & - & 1.08 &1.85 & 3.10 & $\alpha$ & $\alpha$ & 4.48s & $(22\pm4)$ms\\ $^{263}$Bh & -0.05 & - & 2.46 &2.62 & 4.06 & $\alpha$ & $\alpha$ & 0.90s & \\ $^{264}$Bh & 1.59 & - & 1.58 &2.09 & 4.16 & $\beta^{+}/EC$/$\alpha$ & $\alpha$ & 37.65s & $(0.44^{+0.60}_{-0.16})$s\\ $^{265}$Bh & 0.63 & - & 3.13 &2.92 & 4.98 & $\alpha$ & $\alpha$ & 4.23s & $(0.9^{+0.7}_{-0.3})$s\\ $^{266}$Bh & 1.70 & - & 1.90 &2.24 & 5.66 & $\alpha$ & $\alpha$ & 50.37s & $(1.7^{+8.2}_{-0.8})$s\\ $^{267}$Bh & 1.56 & - & 3.97 &3.28 & 7.47 & $\alpha$ & $\alpha$ & 36.65s & $(17^{+14}_{-6})$s\\ $^{268}$Bh & 2.26 & - & 2.64 &2.57 & 7.73 & $\alpha$ & $\alpha$ & 183.18s & \\ $^{269}$Bh & 2.60 & - & 5.19 &3.77 & 10.01 & $\alpha$ & $\alpha$ & 398.86s & \\ $^{270}$Bh & 4.01 & - & 3.38 &2.90 & 9.85 & $\beta^{+}/EC$ & $\alpha$ & 786.19s & $(60^{+29}_{-3})$s\\ $^{271}$Bh & 3.86 & - & - &4.66 & 9.89 & $\alpha$ & $\alpha$ & 2.02h & \\ $^{272}$Bh & 4.46 & - & 4.40 &3.32 & 7.97 & $\beta^{+}/EC$/$\alpha$ & $\alpha$ & 34.67m & $(10^{+12}_{-4})$s\\ $^{274}$Bh & 3.95 & - & 5.00 &3.55 & 5.22 & $\beta^{+}/EC$/$\alpha$ & $\alpha$ & 59.43m & $(0.9^{+4.2}_{-0.4})$m\\ \hline $^{263}$Hs & -0.43 & - &1.50 &2.14 & 2.37 & $\alpha$ & $\alpha$ & 0.37s & $(0.74^{+0.48}_{-0.21})$ms\\ $^{264}$Hs & -0.54 & - &3.17 &3.18 & 1.42 & $\alpha$ & $\alpha$/SF & 0.29s & $(0.8^{+3.9}_{-2.4})$ms\\ $^{265}$Hs & 0.00 & - &1.90 &2.33 & 3.58 & $\alpha$ & $\alpha$ & 0.99s & (1.9$\pm$0.2)ms\\ $^{266}$Hs & 0.00 & - &3.95 &3.52 & 2.27 & $\alpha$ & $\alpha$ & 1.00s & $(2.3^{+1.3}_{-0.6})$ms\\ $^{267}$Hs & 0.95 & - &2.36 &2.54 & 4.68 & $\alpha$ & $\alpha$ & 8.96s & $(52^{+13}_{-8})$ms\\ 
$^{268}$Hs & 0.87 & - &4.71 &3.84 & 3.75 & $\alpha$ & $\alpha$ & 7.36s & $(0.4^{+1.8}_{-0.2})$s\\
$^{269}$Hs & 2.11 & - &3.08 &2.87 & 6.94 & $\alpha$ & $\alpha$ & 128.43s & $(9.7^{+9.3}_{-0.3})$s\\
$^{270}$Hs & 1.25 & - &6.01 &4.36 & 6.69 & $\alpha$ & $\alpha$ & 17.72s & 22s \\
$^{273}$Hs & 2.41 & - &4.73 &3.56 & 7.59 & $\alpha$ & $\alpha$ & 258.14s & $(0.76^{+0.71}_{-0.24})$s\\
$^{275}$Hs & 1.74 & - &5.53 &3.87 & 5.37 & $\alpha$ & $\alpha$ & 55.03s & $(0.15^{+0.27}_{-0.06})$s\\
$^{277}$Hs & 2.64 & - & - &4.53 & 4.58 & $\alpha$ & $\alpha$ & 431.98s & $(3^{+15}_{-1})$ms\\
\hline
$^{266}$Mt & -1.52 & - &0.53 & 1.52 & 0.84 & $\alpha$ & $\alpha$ & 0.03s & $(1.7^{+1.8}_{-1.6})$ms\\
$^{267}$Mt & -2.10 & - &1.67 & 2.19 & 2.26 & $\alpha$ & $\alpha$ & 0.01s & \\
$^{268}$Mt & -1.21 & - &0.84 & 1.67 & 2.48 & $\alpha$ & $\alpha$ & 0.06s & $(21^{+8}_{-5})$ms \\
$^{270}$Mt & -0.05 & - &1.29 & 1.88 & 5.56 & $\alpha$ & $\alpha$ & 0.89s & $(5^{+24}_{-3})$ms\\
$^{274}$Mt & 0.43 & - &2.38 & 2.38 & 6.43 & $\alpha$ & $\alpha$ & 2.68s & $(0.44^{+0.81}_{-0.17})$s\\
$^{275}$Mt & -0.14 & - &5.41 & 3.79 & 5.77 & $\alpha$ & $\alpha$ & 0.73s & $(9.7^{+46.0}_{-0.4})$ms\\
$^{276}$Mt & 0.07 & - &2.81 & 2.58 & 3.92 & $\alpha$ & $\alpha$ & 1.18s & $(0.72^{+0.87}_{-0.25})$s\\
$^{277}$Mt & 0.20 & - & - & 4.60 & 3.20 & $\alpha$ & SF & 1.59s & $(5^{+9}_{-2})$s\\
$^{278}$Mt & 0.19 & - &3.12 & 2.71 & 2.09 & $\alpha$ & $\alpha$ & 1.56s & $(8^{+37}_{-4})$s\\
\hline
\end{longtable}
\end{center}
\normalsize
In Table 3, half-lives are given only where the corresponding decay mode is energetically possible (Q$>$0). Even though $\beta^{+}$-decay and EC-decay are very difficult to distinguish, the half-lives for both these decays are tabulated separately, considering the fact that T$_{\beta^+}$ and T$_{EC}$ are calculated using separate formulas \cite{Fiset1972}, as given in Eqns. \ref{tbeta} and \ref{tecfinal}. However, the predicted decay mode is always denoted $\beta^{+}/EC$, even if only one of the two decay modes is possible. From Table 3, a purposeful comparison of T$_\alpha$, T$_{\beta^-}$, T$_{\beta^+}$, T$_{EC}$ and T$_{SF}$ can easily be made, and the lowest half-life determines the most favourable decay mode, which is tabulated along with the experimental decay mode for comparison. If the half-lives are very close to each other, then more than one option is tabulated for the decay mode; the lowest half-life among these entries is the one given in the second last column. In the last column, the experimental half-lives are given with their errors \cite{nndc}. \par
As far as the deviation from the experimental data is concerned, it is indeed found to be of a statistical nature. To demonstrate this kind of statistical variation, we calculate the uncertainty in our calculated Q values, and correspondingly in Log T$_{1/2}$, using the following formula.
\begin{equation}\label{uncertainty}
u = \sqrt{\frac{\sum(x_i - \mu)^2}{n(n-1)}}
\end{equation}
Here, $x_i$ is the $i^{th}$ reading in the data set, $\mu$ is the mean of the data set, and $n$ is the number of readings in the data set. The uncertainties in the theoretical values are given in Table 4, along with the average uncertainty of the concerned nuclei from experiment \cite{nndc}. Table 4 evidently demonstrates the veracity of our theoretical data and validates the accuracy of our predictions.
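As a simple illustration of Eqn. \ref{uncertainty} with three hypothetical readings $x_1 = 8.1$, $x_2 = 8.2$ and $x_3 = 8.3$ MeV, the mean is $\mu = 8.2$ MeV, and
\begin{equation*}
u = \sqrt{\frac{(0.1)^2 + 0 + (0.1)^2}{3 \cdot 2}} \approx 0.06 \ \mathrm{MeV}.
\end{equation*}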
\begin{table}[!htbp]
\centering
\caption{Uncertainty in the Q values (MeV) and Log T$_{1/2}$ (s).}
\resizebox{0.45\textwidth}{!}{%
\begin{tabular}{l|c|c}
\hline
Quantity & Theoretical & Experimental \\
\hline
Q$_{\alpha}$& 0.037 & 0.14 \\
Q$_{\beta^{-}}$& 0.038 & 0.37 \\
Q$_{\beta^{+}/EC}$&0.041 & 0.33 \\
\hline
Log T$_{1/2}$ & 0.22 & 0.23 \\
\hline
\end{tabular}}
\end{table}
Further, it is gratifying to note from Table 3 that the RMF theory, together with the applied phenomenological formulas and with all the decay modes treated on an equal footing, successfully reproduces the experimental decay modes and half-lives for most of the considered nuclei. As can be seen from the table, $\alpha$-decay and spontaneous fission come forth as strong competitors and share most of the nuclei; however, the probability of weak-decay in these transfermium isotopes cannot be disregarded. Our theoretical predictions indicate a probability of weak-decay equal to that of $\alpha$-decay and spontaneous fission in several nuclei, as mentioned in Table 3. Therefore, our results affirm weak-decay in transfermium isotopes, in line with Refs. \cite{hofmann2016,karpov2012,zagrebaev2012,heenen2015,hirsch1993,moller2019,sarriguren2019}.\par
Since our calculation of the half-life for weak-decay is rather crude and does not involve the main ingredients of weak-decay, i.e. the phase space factor and the nuclear structure used to render the energy distribution of the GT strength \cite{sarriguren2019}, a test of the weak-decay half-lives becomes essential before making further predictions. For this test, we compare our calculated weak-decay half-lives with the recent calculations of Gamow--Teller $\beta$-decay rates, which are obtained from a quasi-particle random-phase approximation with single-particle levels and wave functions at the calculated nuclear ground-state shapes as input quantities \cite{moller2019}. In Fig. \ref{fig2}, we show a comparison between the half-lives calculated in this paper by using the empirical formula of Fiset and Nix \cite{Fiset1972} and the ones obtained from the quasi-particle random-phase approximation (QRPA) \cite{moller2019,moller1990}; the comparison turns out to be fairly reasonable.\par
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{Fig.2.eps}
\caption{(Colour online) Comparison of calculated $\beta$-decay half-lives with quasi-particle random-phase approximation (QRPA) results \cite{moller2019,moller1990}.}\label{fig2}
\end{figure}
The extensive comparison of our results in Tables 3 \& 4, along with Fig. \ref{fig2}, allows us to apply the whole formalism to the other isotopes of the transfermium nuclei, for which predictions can be made of the most probable decay mode and the corresponding half-life. In Table 5, we have tabulated the Log of the half-lives for $\alpha$, $\beta^\pm$, EC, and SF-decay, along with the probable decay mode(s) and the corresponding half-life. From a close inspection, it is noticeable that $\alpha$-decay dominates this part of the periodic chart, and most of the $\alpha$-decay half-lives are within current experimental reach.
In addition, weak-decay is found to be more likely in some of the nuclei, which may provide useful input for future experiments in the search for new elements and weak-decays in this superheavy region.\par
\begin{center}
\small
\setlength{\tabcolsep}{3pt}
\begin{longtable}{|c|c|c|c|c|c|c|c|}
\caption{Prediction for most probable decay mode and calculated half-life for transfermium isotopes}\\
\hline
\multicolumn{1}{|c}{Nucleus}& \multicolumn{1}{|c}{Log T$_\alpha$}& \multicolumn{1}{|c}{Log T$_{\beta^-}$}& \multicolumn{1}{|c}{Log T$_{\beta^+}$}& \multicolumn{1}{|c}{Log T$_{EC}$}& \multicolumn{1}{|c}{Log T$_{SF}$}& \multicolumn{1}{|c}{Decay-Mode}& \multicolumn{1}{|c|}{T$_{1/2}$}\\
\hline
\endfirsthead
\multicolumn{8}{c}%
{\tablename\ \thetable\ -- \textit{Continued from previous page}} \\
\hline
\multicolumn{1}{|c}{Nucleus}& \multicolumn{1}{|c}{Log T$_\alpha$}& \multicolumn{1}{|c}{Log T$_{\beta^-}$}& \multicolumn{1}{|c}{Log T$_{\beta^+}$}& \multicolumn{1}{|c}{Log T$_{EC}$}& \multicolumn{1}{|c}{Log T$_{SF}$}& \multicolumn{1}{|c}{Decay-Mode}& \multicolumn{1}{|c|}{T$_{1/2}$}\\
\hline
\endhead
\hline
\multicolumn{8}{r}{\textit{Continued on next page}} \\
\endfoot
\hline
\endlastfoot
$^{235}$Md & -3.32 & - & 0.37 & 1.85 & -10.23 & SF & <10$^{-2}$ \\
$^{236}$Md & -1.77 & - & -0.46 & 1.34 & -10.30 & SF & <10$^{-2}$ \\
$^{237}$Md & -1.36 & - & 0.48 & 1.91 & -8.46 & SF & <10$^{-2}$ \\
$^{238}$Md & -0.97 & - & -0.37 & 1.38 & -7.46 & SF & <10$^{-2}$ \\
$^{239}$Md & -0.95 & - & 0.68 & 2.00 & -5.34 & SF & <10$^{-2}$ \\
$^{240}$Md & -0.25 & - & -0.09 & 1.51 & -4.59 & SF & <10$^{-2}$ \\
$^{241}$Md & -0.32 & - & 1.03 & 2.17 & -2.36 & SF & <10$^{-2}$ \\
$^{242}$Md & 0.24 & - & 0.15 & 1.63 & -1.38 & SF & 0.04s \\
$^{243}$Md & 0.02 & - & 1.29 & 2.29 & 0.82 & $\alpha$ & 1.06s \\
$^{244}$Md & 0.98 & - & 0.52 & 1.80 & 1.48 & $\beta^{+}/EC$/$\alpha$ & 3.29s \\
$^{261}$Md & 8.64 & - & - & - & 8.61 & SF/$\alpha$ & 12.88y \\
$^{262}$Md & 9.94 & 2.96& - & - & 8.49 & $\beta^{-}$ & 15.14m \\
$^{263}$Md & 10.71 & 5.24& - & - & 9.89 & $\beta^{-}$ & 48.30h \\
$^{264}$Md & 11.97 & 2.50& - & - & 8.75 & $\beta^{-}$ & 5.25m \\
$^{265}$Md & 13.34 & 4.16& - & - & 8.83 & $\beta^{-}$ & 4.02h \\
$^{266}$Md & 13.92 & 1.81& - & - & 7.52 & $\beta^{-}$ & 1.07m \\
$^{267}$Md & 15.70 & 3.41& - & - & 7.52 & $\beta^{-}$ & 43.20m \\
$^{268}$Md & 15.94 & 1.57& - & - & 6.35 & $\beta^{-}$ & 37.43s \\
\hline
$^{238}$No & -2.63 & - & 0.74 & 0.82 & -10.58 & SF & <10$^{-2}$ \\
$^{239}$No & -1.60 & - & 0.85 & 0.91 & -7.92 & SF & <10$^{-2}$ \\
$^{240}$No & -1.97 & - & 0.71 & 0.79 & -8.64 & SF & <10$^{-2}$ \\
$^{241}$No & -1.63 & - & 0.82 & 0.88 & -6.05 & SF & <10$^{-2}$ \\
$^{242}$No & -1.55 & - & 0.63 & 0.73 & -6.24 & SF & <10$^{-2}$ \\
$^{243}$No & -1.63 & - & 0.77 & 0.84 & -3.21 & SF & <10$^{-2}$ \\
$^{244}$No & -1.56 & - & 0.58 & 0.68 & -3.37 & SF & <10$^{-2}$ \\
$^{245}$No & -0.50 & - & 0.69 & 0.77 & -0.37 & $\alpha$ & 0.31s \\
$^{246}$No & -0.81 & - & 0.45 & 0.58 & -0.68 & $\alpha$ & 0.16s \\
$^{247}$No & -0.56 & - & 0.61 & 0.71 & 2.29 & $\alpha$ & 0.27s \\
$^{248}$No & -0.21 & - & 0.32 & 0.49 & 1.87 & $\alpha$ & 0.61s \\
$^{249}$No & 0.26 & - & 0.52 & 0.64 & 4.90 & $\alpha$ & 1.80s \\
$^{261}$No & 6.97 & - & - & -1.53 & 7.60 & $\beta^{+}/EC$ & 0.03s \\
$^{263}$No & 8.63 & 6.63& - & - & 8.34 & $\beta^{-}$ & 48.83d \\
$^{264}$No & 8.55 & - & - & - & 7.52 & SF & 1.06y \\
$^{265}$No & 11.58 & 5.10& - & - & 9.59 & $\beta^{-}$ & 1.45d \\
$^{266}$No & 11.18 & - & - & - & 7.06 & SF & 0.99y \\
$^{267}$No & 13.18 & 4.08 & - & - &
$\beta^{-}$ & 3.33h \\ $^{268}$No & 13.25 & - & - & - & 4.94 & SF & 24.24h \\ \hline $^{241}$Lr & -3.62 & - & 0.28 & 1.74 & -4.47 & SF & <10$^{-2}$ \\ $^{242}$Lr & -1.77 & - & -0.49 & 1.25 & -3.60 & SF & <10$^{-2}$ \\ $^{243}$Lr & -3.89 & - & 0.41 & 1.81 & -6.69 & SF & <10$^{-2}$ \\ $^{244}$Lr & -1.96 & - & -0.26 & 1.36 & -6.01 & SF & <10$^{-2}$ \\ $^{245}$Lr & -3.25 & - & 0.65 & 1.92 & -4.10 & SF & <10$^{-2}$ \\ $^{246}$Lr & -1.19 & - & -0.01 & 1.48 & -3.36 & SF & <10$^{-2}$ \\ $^{247}$Lr & -1.37 & - & 1.24 & 2.20 & -1.45 & SF & <10$^{-2}$ \\ $^{248}$Lr & -0.77 & - & 0.32 & 1.64 & -0.96 & SF & <10$^{-2}$ \\ $^{249}$Lr & -0.70 & - & 1.62 & 2.38 & 1.00 & $\alpha$ & 0.20s \\ $^{250}$Lr & 0.75 & - & 0.69 & 1.81 & 1.97 & $\beta^{+}/EC$/$\alpha$ & 4.86s \\ $^{251}$Lr & 0.38 & - & 2.06 & 2.58 & 3.90 & $\alpha$ & 2.39s \\ $^{263}$Lr & 5.34 & - & - & - & 7.28 & $\alpha$ & 60.51h \\ $^{264}$Lr & 6.25 & 4.45 & - & 4.33 & 7.74 & $\beta^{+}/EC$/$\beta^{-}$ & 5.95h \\ $^{265}$Lr & 6.65 & - & - & - & 9.65 & $\alpha$ & 51.17d \\ $^{267}$Lr & 7.86 & 5.04 & - & - & 9.19 & $\beta^{-}$ & 30.25h \\ $^{268}$Lr & 8.17 & - & - & - & 7.40 & SF & 0.79y \\ $^{269}$Lr & 8.89 & 3.77 & - & - & 7.06 & $\beta^{-}$ & 1.65h \\ $^{270}$Lr & 9.56 & 6.88 & - & - & 5.12 & SF & 36.42h \\ \hline $^{243}$Rf & -3.63 & - & -0.07 & 1.55 & -9.93 & SF & <10$^{-2}$ \\ $^{244}$Rf & -3.45 & - & 1.17 & 2.40 & -10.48 & SF & <10$^{-2}$ \\ $^{245}$Rf & -2.95 & - & 0.13 & 1.64 & -7.97 & SF & <10$^{-2}$ \\ $^{246}$Rf & -3.00 & - & 1.47 & 2.54 & -8.07 & SF & <10$^{-2}$ \\ $^{247}$Rf & -1.47 & - & 0.37 & 1.76 & -5.16 & SF & <10$^{-2}$ \\ $^{248}$Rf & -1.93 & - & 1.87 & 2.73 & -5.54 & SF & <10$^{-2}$ \\ $^{249}$Rf & -1.69 & - & 0.65 & 1.90 & -2.83 & SF & <10$^{-2}$ \\ $^{250}$Rf & -1.56 & - & 2.21 & 2.89 & -2.86 & SF & <10$^{-2}$ \\ $^{251}$Rf & -0.58 & - & 1.02 & 2.07 & 0.16 & $\alpha$ & 1.45s \\ $^{252}$Rf & -0.73 & - & 2.69 & 3.11 & -0.10 & $\alpha$ & 0.80s \\ $^{264}$Rf & 3.87 & 4.14 & - & - & 5.47 & $\alpha$ & 2.05h \\ $^{266}$Rf & 5.23 & 4.66 & - & - & 8.27 & $\beta^{-}$ & 12.63h \\ $^{267}$Rf & 6.68 & - & - & - & 10.14 & $\alpha$ & 55.17d \\ $^{268}$Rf & 6.55 & - & - & - & 7.58 & $\alpha$ & 40.77d \\ $^{269}$Rf & 7.46 & 6.65 & - & - & 7.98 & $\beta^{-}$ & 51.37d \\ $^{270}$Rf & 6.95 & - & - & - & 5.06 & SF & 31.73h \\ $^{271}$Rf & 7.65 & 4.69 & - & - & 5.62 & $\beta^{-}$ & 13.46h \\ $^{272}$Rf & 7.87 & - & - & - & 2.89 & SF & 13.08m \\ \hline $^{245}$Db & -4.76 & - & 0.05 & 1.56 & -10.57 & SF & <10$^{-2}$ \\ $^{246}$Db & -3.11 & - & -0.68 & 1.08 & -10.43 & SF & <10$^{-2}$ \\ $^{247}$Db & -3.37 & - & 0.17 & 1.62 & -8.29 & SF & <10$^{-2}$ \\ $^{248}$Db & -2.11 & - & -0.45 & 1.20 & -7.97 & SF & <10$^{-2}$ \\ $^{249}$Db & -2.65 & - & 0.53 & 1.79 & -6.06 & SF & <10$^{-2}$ \\ $^{250}$Db & -1.88 & - & -0.24 & 1.30 & -5.62 & SF & <10$^{-2}$ \\ $^{251}$Db & -3.03 & - & 0.80 & 1.92 & -3.22 & SF & <10$^{-2}$ \\ $^{252}$Db & -1.29 & - & 0.04 & 1.44 & -2.42 & SF & <10$^{-2}$ \\ $^{253}$Db & -2.09 & - & 1.15 & 2.09 & -0.32 & $\alpha$ & 0.48s \\ $^{254}$Db & -0.75 & - & 0.38 & 1.60 & 0.58 & $\alpha$ & 3.77s \\ $^{264}$Db & 3.94 & - & 3.40 & 2.97 & 5.75 & $\beta^{+}/EC$& 15.53m \\ $^{265}$Db & 3.32 & - & 8.46 & 4.49 & 7.69 & $\alpha$ & 35.06m \\ $^{266}$Db & 4.27 & - & 3.92 & 3.19 & 8.71 & $\beta^{+}/EC$& 25.78m \\ $^{269}$Db & 5.70 & - & - & - & 9.66 & $\alpha$ & 5.84d \\ $^{271}$Db & 6.36 & - & - & - & 7.02 & $\alpha$ & 26.54d \\ $^{272}$Db & 6.36 & 3.52 & - & 4.86 & 5.11 & $\beta^{-}$ & 54.88m \\ \hline $^{248}$Sg & -3.89 & - & 0.94 & 2.21 & -11.88 & 
SF & <10$^{-2}$ \\ $^{249}$Sg & -2.54 & - & 0.00 & 1.50 & -9.38 & SF & <10$^{-2}$ \\ $^{250}$Sg & -3.14 & - & 1.32 & 2.39 & -9.89 & SF & <10$^{-2}$ \\ $^{251}$Sg & -2.97 & - & 0.23 & 1.61 & -6.71 & SF & <10$^{-2}$ \\ $^{252}$Sg & -2.93 & - & 1.61 & 2.53 & -6.77 & SF & <10$^{-2}$ \\ $^{253}$Sg & -2.05 & - & 0.50 & 1.74 & -3.74 & SF & <10$^{-2}$ \\ $^{254}$Sg & -2.16 & - & 1.98 & 2.70 & -3.81 & SF & <10$^{-2}$ \\ $^{255}$Sg & -1.33 & - & 0.85 & 1.91 & -0.78 & $\alpha$ & 0.05s \\ $^{256}$Sg & -1.51 & - & 2.39 & 2.89 & -0.75 & $\alpha$ & 0.03s \\ $^{257}$Sg & -0.65 & - & 1.19 & 2.07 & 2.25 & $\alpha$ & 0.22s \\ $^{267}$Sg & 3.11 & - & 5.00 & 3.73 & 9.18 & $\alpha$ & 21.24m \\ $^{268}$Sg & 3.29 & - & - & - & 8.61 & $\alpha$ & 32.78m \\ $^{270}$Sg & 4.30 & - & - & - & 7.98 & $\alpha$ & 5.58h \\ $^{272}$Sg & 4.60 & - & - & - & 5.31 & $\alpha$ & 11.09h \\ $^{273}$Sg & 4.91 & - & - & - & 5.61 & $\alpha$ & 22.63h \\ $^{274}$Sg & 5.05 & - & - & - & 2.62 & SF & 6.87m \\ $^{275}$Sg & 5.19 & 5.27 & - & - & 3.57 & SF & 61.37m \\ $^{276}$Sg & 5.13 & - & - & - & 1.27 & SF & 18.54s \\ \hline $^{250}$Bh & -2.80 & - & -0.81 & 0.95 & -12.09 & SF & <10$^{-2}$ \\ $^{251}$Bh & -3.99 & - & 0.07 & 1.50 & -9.62 & SF & <10$^{-2}$ \\ $^{252}$Bh & -2.93 & - & -0.63 & 1.04 & -8.93 & SF & <10$^{-2}$ \\ $^{253}$Bh & -3.73 & - & 0.30 & 1.61 & -6.87 & SF & <10$^{-2}$ \\ $^{254}$Bh & -2.04 & - & -0.40 & 1.15 & -6.03 & SF & <10$^{-2}$ \\ $^{255}$Bh & -2.80 & - & 0.58 & 1.75 & -4.04 & SF & <10$^{-2}$ \\ $^{256}$Bh & -1.41 & - & -0.12 & 1.28 & -3.26 & SF & <10$^{-2}$ \\ $^{257}$Bh & -2.26 & - & 0.88 & 1.89 & -0.94 & $\alpha$ & 0.01s \\ $^{258}$Bh & -0.62 & - & 0.16 & 1.42 & -0.22 & $\alpha$ & 0.24s \\ $^{259}$Bh & -0.83 & - & 1.25 & 2.06 & 1.49 & $\alpha$ & 0.15s \\ $^{273}$Bh & 4.02 & - & - & - & 7.15 & $\alpha$ & 2.92h \\ $^{275}$Bh & 4.25 & - & - & - & 4.15 & SF/$\alpha$ & 3.88h \\ $^{276}$Bh & 4.83 & 5.09 & 7.12 & 4.14 & 3.12 & SF & 21.91m \\ $^{277}$Bh & 4.65 & - & - & - & 3.02 & SF & 17.52m \\ $^{278}$Bh & 5.23 & 4.07 & - & 4.97 & 2.63 & SF & 7.05m \\ \hline $^{253}$Hs & -3.74 & - & -0.17 & 1.34 & -10.25 & SF & <10$^{-2}$ \\ $^{254}$Hs & -3.87 & - & 1.07 & 2.20 & -10.38 & SF & <10$^{-2}$ \\ $^{255}$Hs & -2.66 & - & 0.09 & 1.47 & -7.50 & SF & <10$^{-2}$ \\ $^{256}$Hs & -3.16 & - & 1.35 & 2.33 & -7.64 & SF & <10$^{-2}$ \\ $^{257}$Hs & -2.39 & - & 0.32 & 1.58 & -4.83 & SF & <10$^{-2}$ \\ $^{258}$Hs & -2.56 & - & 1.63 & 2.47 & -4.61 & SF & <10$^{-2}$ \\ $^{259}$Hs & -1.73 & - & 0.53 & 1.68 & -1.72 & SF/$\alpha$ & 0.02s \\ $^{260}$Hs & -1.45 & - & 1.96 & 2.62 & -1.68 & SF & 0.02s \\ $^{261}$Hs & -0.28 & - & 0.99 & 1.90 & 0.85 & $\alpha$ & 0.53s \\ $^{262}$Hs & -1.06 & - & 2.47 & 2.86 & 0.22 & $\alpha$ & 0.09s \\ $^{271}$Hs & 2.56 & - & 3.83 & 3.19 & 8.86 & $\alpha$ & 5.99m \\ $^{272}$Hs & 2.00 & - & - & 5.31 & 6.98 & $\alpha$ & 1.68m \\ $^{274}$Hs & 1.81 & - & - & - & 4.77 & $\alpha$ & 63.86s \\ $^{276}$Hs & 1.93 & - & - & - & 2.28 & $\alpha$ & 85.50s \\ $^{278}$Hs & 2.06 & - & - & - & 2.71 & $\alpha$ & 1.92m \\ $^{279}$Hs & 3.61 & - & - & 5.66 & 3.45 & SF/$\alpha$ & 47.37m \\ $^{280}$Hs & 2.96 & - & - & - & 2.47 & SF/$\alpha$ & 4.87m \\ $^{281}$Hs & 4.18 & - & - & - & 5.42 & $\alpha$ & 4.19h \\ $^{282}$Hs & 4.45 & - & - & - & 3.79 & SF & 1.70h \\ \hline $^{255}$Mt & -4.31 & - & -0.10 & 1.34 & -10.76 & SF & <10$^{-2}$ \\ $^{256}$Mt & -2.35 & - & -0.69 & 0.93 & -10.21 & SF & <10$^{-2}$ \\ $^{257}$Mt & -3.82 & - & 0.14 & 1.46 & -7.99 & SF & <10$^{-2}$ \\ $^{258}$Mt & -2.21 & - & -0.52 & 1.01 & -7.51 & SF & <10$^{-2}$ \\ 
$^{259}$Mt & -3.10 & - & 0.41 & 1.59 & -4.80 & SF & <10$^{-2}$ \\
$^{260}$Mt & -1.65 & - & -0.32 & 1.11 & -4.03 & SF & <10$^{-2}$ \\
$^{261}$Mt & -2.25 & - & 0.59 & 1.67 & -1.70 & $\alpha$ &0.01s \\
$^{262}$Mt & -0.89 & - & 0.00 & 1.27 & -1.20 & SF &0.06s \\
$^{263}$Mt & -2.51 & - & 0.89 & 1.82 & 0.39 & $\alpha$ & <10$^{-2}$ \\
$^{264}$Mt & -1.58 & - & 0.21 & 1.37 & 0.50 & $\alpha$ & 0.03s \\
$^{265}$Mt & -2.67 & - & 1.28 & 2.00 & 1.35 & $\alpha$ & <10$^{-2}$ \\
$^{269}$Mt & -1.41 & - & 2.05 & 2.36 & 4.71 & $\alpha$ & 0.04s \\
$^{271}$Mt & -0.75 & - & 2.67 & 2.65 & 7.61 & $\alpha$ & 0.18s \\
$^{272}$Mt & 0.35 & - & 1.79 & 2.11 & 7.71 & $\alpha$ & 2.25s \\
$^{273}$Mt & -0.16 & - & 3.68 & 3.09 & 7.85 & $\alpha$ & 0.69s \\
$^{279}$Mt & 0.54 & - & - & 6.05 & 2.64 & $\alpha$ & 3.47s \\
$^{280}$Mt & 1.04 & 6.32 & 3.88 & 3.04 & 2.65 & $\alpha$ & 11.03s \\
$^{281}$Mt & 0.78 & - & - & - & 3.65 & $\alpha$ & 6.01s \\
$^{282}$Mt & 3.62 & 6.61 & 6.10 & 3.87 & 4.17 & $\alpha$ & 1.16h \\
$^{283}$Mt & 4.37 & - & 5.24 & - & 5.44 & $\alpha$ & 6.45h \\
$^{284}$Mt & 6.58 & 4.95 & - & 4.94 & 6.34 & $\beta^{+}/EC$/$\beta^{-}$& 24.36h \\
\hline
\end{longtable}
\end{center}
\normalsize
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{Fig.3.eps}
\caption{(Colour online) Chart of considered nuclei with their probable decay modes.}\label{fig3}
\end{figure}
All the predictions are summarized in the form of a nuclear chart, shown in Fig. \ref{fig3}. The dominant decay modes are indicated by different colours for the considered nuclei. The shaded blocks correspond to decay modes which are known experimentally \cite{nndc}.
\section{Conclusions}
Decay modes are studied for odd and even transfermium nuclei in the range of $^{235-268}$Md, $^{238-268}$No, $^{241-270}$Lr, $^{243-272}$Rf, $^{245-272}$Db, $^{248-276}$Sg, $^{250-278}$Bh, $^{253-282}$Hs, and $^{255-284}$Mt. For all these nuclei, the calculations are done using the relativistic mean-field (RMF) theory, and the results are found to be in excellent agreement with the available experimental data, as well as with the Hartree-Fock-Bogoliubov (HFB) mass model with the HFB-24 functional, the relativistic continuum Hartree-Bogoliubov (RCHB) theory with the relativistic density functional PC-PK1, the nuclear mass table with the global mass formula WS4, and the recently reported Finite Range Droplet Model (FRDM) calculations. A comparison among $\alpha$, $\beta^\pm$, electron capture decays, and spontaneous fission is demonstrated, which leads to the most probable decay mode along with its half-life. As an important consequence, in spite of the fact that the $\alpha$-decay and SF modes are found to dominate, the chances of weak-decay modes cannot be ignored in the considered transfermium nuclei. Indeed, we have found several isotopes in which the weak-decay mode is quite comparable to, or sometimes more probable than, $\alpha$-decay. The half-lives for the weak-decay modes are found to be in accord with the half-lives calculated using the quasi-particle random-phase approximation (QRPA). These important findings suggest a new path to locate the gap between the nuclei synthesized by cold and hot reactions.
\section{Acknowledgement}
The authors take great pleasure in thanking the referee for the several suggestions and comments which helped to improve the manuscript. G. Saxena acknowledges the support provided by SERB (DST), Govt. of India under CRG/2019/001851.
\section{Introduction}
Shisen-Sho and Mahjong Solitaire are games in which the object is to remove tiles in a certain way. The tiles come in groups of four equal tiles. The game is prepared by stacking the tiles randomly in a certain layout, which may be threedimensional in the case of Mahjong Solitaire. The object is to remove the tiles in pairs of equal tiles. For Shisen-Sho and Mahjong Solitaire, the conditions under which a pair of equal tiles may be removed are different. For Shisen-Sho, a pair of equal tiles may be removed if they are either adjacent, or they can be connected by at most $3$ horizontal and vertical free lines. Here, a line is free if it does not cross or edge a tile. For Mahjong Solitaire, a pair of equal tiles may be removed if they are both free. A tile is free if there are no adjacent tiles above it, and none on either the left or the right side. Mahjong Solitaire is usually played with the 144 tiles of the Mahjong game, where both the $4$ season tiles and the $4$ flower tiles are seen as a group of $4$ equal tiles. Various layout shapes are used. Shisen-Sho is usually played with a rectangular layout. A pair of tiles which can be removed is called a playable pair. One way to play the game is by randomly removing playable pairs, hoping that one will not get stuck. But if one gets stuck, one still does not know if all tiles could have been removed. To determine if all tiles can be removed or not, an exhaustive search is needed in general. An efficient way to do this for Mahjong Solitaire can be found in my paper \cite{dB}. For Shisen-Sho, we essentially use the same algorithm, but with the test for playable pairs replaced. There is however one point of attention. In the algorithm of \cite{dB}, there are scans used for pruning, in which the third and fourth tile of several groups may be removed individually as soon as they are free. For Mahjong Solitaire, this comes down to allowing any tile of such a group to be removed as soon as it forms a playable pair with another tile of the group: a tile that may or may not have been removed already. But for Shisen-Sho, the latter is more restrictive, whence it leads to better pruning. So we use the latter interpretation when we make a Shisen-Sho version of the algorithm. But first, we prove that Shisen-Sho is NP-complete. More precisely, we prove that determining if all tiles can be removed in a Shisen-Sho board with a rectangular layout is NP-complete. Next, we show that it can be determined in logarithmic time whether two equal tiles form a playable pair, under the assumption that we have bitwise operations on registers with at least as many bits as the layout dimensions. After that, we extend the rules of the Shisen-Sho game to the Mahjong Solitaire grid, and adapt the algorithm for determining if two equal tiles form a playable pair to this grid. At the end, we present some graphs to indicate how many random boards are winnable (i.e.\@ all tiles can be removed) for several layouts, for Shisen-Sho, Mahjong Solitaire, and transposed Mahjong Solitaire. With transposed Mahjong Solitaire, the layout is transposed, or equivalently, left and right are replaced by front and rear in the rules of Mahjong Solitaire.

\section{Complexity of Shisen-Sho}
In \cite{IWM}, the following theorem has been proved already, but the proof is longer than the proof below. The refinement of having only $5$ rows seems to be a new result.
\begin{theorem}
Shisen-Sho is NP-complete.
\end{theorem}
\begin{proof}
We reduce from Mahjong Solitaire with peeking with isolated stacks. The NP-completeness of that is proved under the tag Shanghai in \cite{Ep}. Let $a$ be a tile of a Mahjong Solitaire stack. We turn $a$ into a supertile of size $5 \times 5$ for Shisen-Sho by adding $6$ extra groups.
\begin{center}
\begin{tikzpicture}[x=5mm,y=5mm]
\fill[black!10] (0,0) rectangle (5,5);
\fill[black!25] (0,0) rectangle (2,4) (3,0) rectangle (5,4);
\fill[white] (2,2) rectangle (3,3);
\draw[black!50] \foreach \x in {1,...,4} { (\x,0) -- (\x,5) };
\draw[black!50] \foreach \y in {1,...,4} { (0,\y) -- (5,\y) };
\draw (0,0) rectangle (5,5) (0,0) rectangle (2,2) (3,0) rectangle (5,2);
\draw (0,2) rectangle (2,4) (2,2) rectangle (3,3) (3,2) rectangle (5,4);
\draw \foreach[count=\n] \v in {z,z,z,z',z'} { (\n-1,4.5) node[anchor=west] {$\strut\v$} };
\draw \foreach[count=\n] \v in {x,x',z',y',y} { (\n-1,3.5) node[anchor=west] {$\strut\v$} };
\draw \foreach[count=\n] \v in {x',x,a,y,y'} { (\n-1,2.5) node[anchor=west] {$\strut\v$} };
\draw \foreach[count=\n] \v in {y',y,z,x,x'} { (\n-1,1.5) node[anchor=west] {$\strut\v$} };
\draw \foreach[count=\n] \v in {y,y',z',x',x} { (\n-1,0.5) node[anchor=west] {$\strut\v$} };
\end{tikzpicture}
\qquad
\begin{tikzpicture}[x=5mm,y=5mm]
\fill[black!10] (0,0) rectangle (5,5);
\fill[black!25] (0,0) rectangle (2,4) (3,0) rectangle (5,4);
\draw[black!50] \foreach \x in {1,...,4} { (\x,0) -- (\x,5) };
\draw[black!50] \foreach \y in {1,...,4} { (0,\y) -- (5,\y) };
\fill[white] (2,2) rectangle (3,5) (1,4) rectangle (4,5);
\draw (0,4) rectangle (1,5) (4,4) rectangle (5,5);
\draw (2,0) -- (3,0) (0,0) rectangle (2,2) (3,0) rectangle (5,2);
\draw (0,2) rectangle (2,4) (2,2) rectangle (3,3) (3,2) rectangle (5,4);
\draw \foreach[count=\n] \v in {z,,,,z'} { (\n-1,4.5) node[anchor=west] {$\strut\v$} };
\draw \foreach[count=\n] \v in {x,x',,y',y} { (\n-1,3.5) node[anchor=west] {$\strut\v$} };
\draw \foreach[count=\n] \v in {x',x,a,y,y'} { (\n-1,2.5) node[anchor=west] {$\strut\v$} };
\draw \foreach[count=\n] \v in {y',y,z,x,x'} { (\n-1,1.5) node[anchor=west] {$\strut\v$} };
\draw \foreach[count=\n] \v in {y,y',z',x',x} { (\n-1,0.5) node[anchor=west] {$\strut\v$} };
\end{tikzpicture}
\end{center}
In order to remove one of the $x$, $x'$, $y$ and $y'$-tiles, the $a$-tile needs to be removed. Hence the $a$-tile needs to be removed to free the $z$-tile and $z'$-tile below it as well. So the $z$-tile and $z'$-tile above the $a$-tile need to be removed to free the $a$-tile. This can be done (in more ways if there is no supertile on top of the supertile of $a$), after which the $a$-tile has a connection upwards. But the connection is blocked if there is another supertile on top of the supertile of $a$. If tile $a$ can be removed somehow, then the whole supertile of $a$ can be freed. So by stacking supertiles, we can emulate Mahjong Solitaire with peeking with isolated stacks in Shisen-Sho. The stacks of supertiles can be filled up to a rectangle, by adding tile groups in literal order. This is because those tiles can be played in the same order in accordance with the rules of Shisen-Sho.
\end{proof}
\begin{theorem}
Shisen-Sho with $5$ rows is NP-complete.
\end{theorem}
\begin{proof}
We reduce from Mahjong Solitaire with peeking with isolated stacks of the form $aab$ and $abb$. The NP-completeness of that is proved in \S 2 of \cite{dB}. We turn stacks $aab$ and $abb$ as follows into tileblocks.
\begin{center}
\begin{tikzpicture}[x=5mm,y=5mm]
\fill[black!10] (0,0) rectangle (5,5);
\fill[black!25] (0,0) rectangle (2,4) (3,0) rectangle (5,4);
\draw[black!50] \foreach \x in {1,...,4} { (\x,0) -- (\x,5) };
\draw[black!50] \foreach \y in {1,...,4} { (0,\y) -- (5,\y) };
\draw[fill=white] (2,1) rectangle (3,2) (2,2) rectangle (3,3) (2,4) rectangle (3,5);
\draw (0,0) rectangle (5,5) (0,0) rectangle (2,2) (3,0) rectangle (5,2);
\draw (0,2) rectangle (2,4) (2,2) rectangle (3,3) (3,2) rectangle (5,4);
\draw \foreach[count=\n] \v in {z,z',a,z',z} { (\n-1,4.5) node[anchor=west] {$\strut\v$} };
\draw \foreach[count=\n] \v in {x,x',z,y',y} { (\n-1,3.5) node[anchor=west] {$\strut\v$} };
\draw \foreach[count=\n] \v in {x',x,a,y,y'} { (\n-1,2.5) node[anchor=west] {$\strut\v$} };
\draw \foreach[count=\n] \v in {y',y,b,x,x'} { (\n-1,1.5) node[anchor=west] {$\strut\v$} };
\draw \foreach[count=\n] \v in {y,y',z,x',x} { (\n-1,0.5) node[anchor=west] {$\strut\v$} };
\end{tikzpicture}
\qquad
\begin{tikzpicture}[x=5mm,y=5mm]
\fill[black!10] (0,0) rectangle (5,5);
\fill[black!25] (0,0) rectangle (2,4) (3,0) rectangle (5,4);
\draw[black!50] \foreach \x in {1,...,4} { (\x,0) -- (\x,5) };
\draw[black!50] \foreach \y in {1,...,4} { (0,\y) -- (5,\y) };
\draw[fill=white] (2,1) rectangle (3,2) (2,3) rectangle (3,4) (2,4) rectangle (3,5);
\draw (0,0) rectangle (5,5) (0,0) rectangle (2,2) (3,0) rectangle (5,2);
\draw (0,2) rectangle (2,4) (2,2) rectangle (3,3) (3,2) rectangle (5,4);
\draw \foreach[count=\n] \v in {z,z',a,z',z} { (\n-1,4.5) node[anchor=west] {$\strut\v$} };
\draw \foreach[count=\n] \v in {x,x',b,y',y} { (\n-1,3.5) node[anchor=west] {$\strut\v$} };
\draw \foreach[count=\n] \v in {x',x,z,y,y'} { (\n-1,2.5) node[anchor=west] {$\strut\v$} };
\draw \foreach[count=\n] \v in {y',y,b,x,x'} { (\n-1,1.5) node[anchor=west] {$\strut\v$} };
\draw \foreach[count=\n] \v in {y,y',z,x',x} { (\n-1,0.5) node[anchor=west] {$\strut\v$} };
\end{tikzpicture}
\end{center}
The tiles with $z'$ are for filling up, and are shared among two tileblocks.
\end{proof}

\section{Testing if two tiles are a matching pair}
Suppose the width and height of the board are both at most $n$. Let $d$ be the distance between the two tiles at hand.
\begin{proposition}
We can test if two tiles are a matching pair in $\Omicron(n \cdot d)$ steps.
\end{proposition}
\begin{proof}
We only need to study connections with the first and third line horizontal and the second line vertical, since the other type of three-line connection is similar, and connections with fewer lines are degenerate cases of three-line connections. First, we compute how far we can reach horizontally from both tiles. This can be done in $\Omicron(n)$ steps. For each column that can be reached horizontally by both tiles, we check if it is free between the two tiles. This can be done in $\Omicron(d)$ steps for each column.
\end{proof}
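To make the procedure of the proof concrete, here is a sketch of the $\Omicron(n \cdot d)$ test (our illustration, not the solver's code; a cell-based board with one cell per tile is assumed, and connections are kept inside the board).
\begin{lstlisting}[language=C++,basicstyle=\ttfamily\footnotesize, backgroundcolor=\color{black!10},frame=shadowbox, rulecolor=\color{black!50},rulesepcolor=\color{black!25},xrightmargin=2pt]
#define W 32 // illustrative board width
#define H 32 // illustrative board height

// illustrative sketch: board[r][c] != 0 marks a tile; tests whether
// the tiles at (r1,c1) and (r2,c2), with r1 <= r2, can be connected
// with the first and third line horizontal and the second vertical
static int naive_match (int board[H][W], int r1, int c1, int r2, int c2)
{
  int range1[W] = { 0 }, range2[W] = { 0 };
  // mark the columns that can be reached horizontally from each
  // tile, in O(n) steps
  range1[c1] = range2[c2] = 1;
  for (int c = c1 + 1; c < W && !board[r1][c]; c++) range1[c] = 1;
  for (int c = c1 - 1; c >= 0 && !board[r1][c]; c--) range1[c] = 1;
  for (int c = c2 + 1; c < W && !board[r2][c]; c++) range2[c] = 1;
  for (int c = c2 - 1; c >= 0 && !board[r2][c]; c--) range2[c] = 1;
  // for each column in the common range, check that it is free
  // between the two tiles, in O(d) steps per column
  for (int c = 0; c < W; c++) {
    if (range1[c] && range2[c]) {
      int ok = 1;
      for (int r = r1 + 1; r < r2 && ok; r++)
        if (board[r][c]) ok = 0;
      if (ok) return 1;
    }
  }
  return 0;
}
\end{lstlisting}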
We can improve the bound of $\Omicron(n \cdot d)$ drastically, if we assume that we have registers with $n$ bits. We assume that we can perform bitwise logic on registers with $n$ bits, including zero testing. Furthermore, we assume that we can set the first $k$ bits and clear the last $n-k$ bits of a register with $n$ bits. The value of $k$ comes from a smaller register which can hold the values $0,1,2,\ldots,n$, i.e.\@ a register with more than $\log n$ bits. We assume that we can do everything on the small registers which we can do on the large registers with $n$ bits. Furthermore, we assume that we can do either addition or subtraction (hence both) on the small registers. Our memory usage is $\Omicron(n)$ registers.
\begin{theorem} \label{notree}
Testing if two tiles form a matching pair can be performed in $\Omicron(\log n + d)$ steps. With bit scan techniques available on the large registers, only $\Omicron(d)$ steps are needed.
\end{theorem}
\begin{proof}
Again, we only need to study connections with the first and third line horizontal and the second line vertical. We assume that we have registers with tile population vectors for each row. Computing how far one can reach horizontally from a tile can be done with binary search in $\Omicron(\log n)$ steps. With bit scan techniques, $\Omicron(1)$ steps are sufficient. Finding a free column in the common horizontal range can be done in parallel, by computing the bitwise disjunction (bitwise or) of the rows between both tiles. This takes $\Omicron(d)$ steps.
\end{proof}
In a practical algorithm where 64 bit integers are used as large registers, computing how far one can reach horizontally from a tile will be done differently, because subtraction can be used as well. In our algorithm, we first remove the tile at hand from the population vector. Next, we compute how far one can reach horizontally from the position of the left side of the tile, using the following function.
\begin{lstlisting}[language=C++,basicstyle=\ttfamily\footnotesize, backgroundcolor=\color{black!10},frame=shadowbox, rulecolor=\color{black!50},rulesepcolor=\color{black!25},xrightmargin=2pt]
// optional speedup
#define bitscanreverse64(g) (63 ^ __builtin_clzl(g))

// p is the position to scan from for consecutive cleared bits in f
// p must be in the interval [0..63], optional to allow 64 as well
// consecutive cleared bits found are returned as set bits
static unsigned long fillrange (unsigned long f, int p)
{
  unsigned long g = f;
  // /* if (0 <= p <= 63) */ g = bits of f below position p
  /* if (!(p & ~63)) */ g &= ((1lu << p) - 1);
  if (g == 0) return ~f & (f - 1);
  // g = smallest power of 2 exceeding g
#ifdef bitscanreverse64
  g = 2lu << bitscanreverse64 (g);
#else
  g |= g >> 1; g |= g >> 2; g |= g >> 4;
  g |= g >> 8; g |= g >> 16; g |= g >> 32;
  g++;
#endif
  return ~f & (f - g);
}
\end{lstlisting}
Notice that forward bit scanning is not necessary, because subtraction can be used. If reverse bit scanning is not available, then binary search is still not used, because it would take conditional jumps, which are not preferable on modern CPUs. Just as binary search, the actual replacement technique for reverse bit scanning takes $\Omicron(\log n)$ steps. If one replaces {\ttfamily\^{ }} by {\ttfamily-} in the macro {\ttfamily bitscanreverse64}, then gcc will not optimize it to the corresponding Intel/AMD x64 instruction.
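As a small illustration of how {\ttfamily fillrange} is used (an illustrative fragment with a hypothetical array {\ttfamily rowfill} of row population vectors; in the plain Shisen-Sho grid every tile occupies one bit):
\begin{lstlisting}[language=C++,basicstyle=\ttfamily\footnotesize, backgroundcolor=\color{black!10},frame=shadowbox, rulecolor=\color{black!50},rulesepcolor=\color{black!25},xrightmargin=2pt]
// illustrative fragment, not the solver's code: rowfill is a
// hypothetical array of row population vectors
unsigned long f = rowfill[r] & ~(1lu << col); // remove the tile itself
unsigned long range = fillrange (f, col); // set bits mark the columns
                                          // reachable from the tile
\end{lstlisting}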
\begin{theorem}
If we allow playing and unplaying a tile to take $\Omicron(\log n)$ steps as a gambit, then testing if two tiles form a matching pair can be done in $\Omicron(\log n)$ steps. With bit scan techniques available on all registers, this improves to $\Omicron(\log d)$ steps.
\end{theorem}
\begin{proof}
We improve the computation of the bitwise disjunction (bitwise or) in the proof of theorem \ref{notree} from $\Omicron(d)$ to $\Omicron(\log d)$. Suppose that the rows are indexed $0,1,\ldots,n-1$. We embed the row indexes in the odd numbers of the range $1,2,\ldots,2n-1$, by sending row index $k$ to $2k+1$. We see the rows as leaves of a perfect tree. The even numbers in the range $1,2,\ldots,2n-1$ are used for the interior nodes of the tree, as depicted below.
\begin{center}
\begin{tikzpicture}[x=10.5pt,y=10.5pt]
\tikzstyle{nodestyle}=[circle,draw,fill=black!10,inner sep=0pt,minimum size=4mm]
\draw (32,5) node[nodestyle] (32) {$\scriptstyle 32$};
\draw (16,4) node[nodestyle] (16) {$\scriptstyle 16$} (16) -- (32);
\foreach \n/\m in {8/16,24/16} { \draw (\n,3) node[nodestyle] (\n) {$\scriptstyle \n$} (\n) -- (\m); }
\foreach \n/\m in {4/8,12/8,20/24,28/24} { \draw (\n,2) node[nodestyle] (\n) {$\scriptstyle \n$} (\n) -- (\m); }
\foreach \n/\m in {2/4,6/4,10/12,14/12,18/20,22/20,26/28,30/28} { \draw (\n,1) node[nodestyle] (\n) {$\scriptstyle \n$} (\n) -- (\m); }
\foreach \n/\m in {1/2,3/2,5/6,7/6,9/10,11/10,13/14,15/14, 17/18,19/18,21/22,23/22,25/26,27/26,29/30,31/30} { \draw (\n,0) node[nodestyle,fill=black!25] (\n) {$\scriptstyle \n$} (\n) -- (\m); }
\end{tikzpicture}
\end{center}
The data of the leaves are the population vectors of the corresponding rows. The data of the internal nodes are the bitwise disjunction of the data of the child nodes, i.e.\@ the bitwise disjunction of all population vectors of its descendant leaves. This can indeed be maintained in $\Omicron(\log n)$ steps when a tile is played or unplayed, see also the code below. Suppose that we have tiles in rows $i-1$ and $j$ such that $i-1 < j$. Then we must compute the bitwise disjunction of rows $i,i+1,\ldots,j-1$, i.e. the bitwise disjunction of the leaves between $2i$ and $2j$. If $i < j$, then the index of the last common ancestor of leaves $2i + 1$ and $2j + 1$ lies between $2i + 1$ and $2j + 1$. By taking $k$ such that $2k$ is the index of this common ancestor if $i < j$, and by taking $k = i$ if $i = j$, we obtain that $2k$ lies between $2i - 1$ and $2j + 1$, and is the index of a common ancestor of leaves $2i + 1$ and $2j - 1$ if $i < j$, which we assume from now on. The computation of $k$ such that $2k$ is the last common ancestor of leaves $2i+1$ and $2j+1$, where $i < j$, can be done in $\Omicron(1)$ steps with reverse bit scanning on small registers, see the code below. Without bit scan techniques, we can use binary search to replace reverse bit scanning, just as for the large registers in the computation of the horizontal range, and $\Omicron(\log \log n)$ steps suffice. We can compute the bitwise disjunction of the leaves between $2i$ and $2k$ in $\Omicron(\log d)$ steps as follows. Suppose first that both $i$ and $k$ are odd. Then node $2k$ is the parent node of node $2i+1$, and either $k = i$ or $k = i+1$. So $k = i$ and the result is $0$. Suppose next that either $i$ or $k$ is odd, but not both. Then node $2i + 2$ is the last ancestor of node $2i+3 = 2(i+1) + 1$ which is smaller than $2i+3$, and $i + 1 \le k$. Furthermore, node $2i + 2$ is the last ancestor of node $2i+1$ which is larger than $2i+1$, and node $2k$ is any such ancestor, so node $2k$ is an ancestor of or just equal to node $2i + 2$. Consequently, node $2k$ is an ancestor of node $2(i+1) + 1$. We can obtain the bitwise disjunction of the leaves between $2(i+1)$ and $2k$ recursively. The result is the bitwise disjunction of that and the data of node $2i+1$. Suppose finally that both $i$ and $k$ are even. Then node $2i + 2 = 2\big(2(i/2)+1\big)$ is the parent of node $2i + 1$. So node $2k = 2\big(2(k/2)\big)$ is an ancestor of node $2\big(2(i/2)+1\big)$. First, we remove the leaves of the tree and keep the internal nodes with their even numbers.
Next, we divide the node numbers by $2$, to obtain a new tree of the same type as the one we started with. The result is the bitwise disjunction of the new leaves between $2(i/2)$ and $2(k/2)$ of the new tree, which we can obtain recursively. The bitwise disjunction of the leaves between $2k$ and $2j$ can be computed similarly.
\end{proof}
Below is the code of a practical algorithm where 64 bit integers are used as large registers, for updating the bitwise disjunction tree after playing or unplaying a tile.
\begin{lstlisting}[language=C++,basicstyle=\ttfamily\footnotesize, backgroundcolor=\color{black!10},frame=shadowbox, rulecolor=\color{black!50},rulesepcolor=\color{black!25},xrightmargin=2pt]
// updates fill[2], fill[4], ..., fill[126] as
// interlacing ortree of fill[1], fill[3], ..., fill[127]
// after changing fill[p] for odd p between 0 and 128
// ______________________________32
// ______________16______________
// ______08______  ______24______
// __04__  __12__  __20__  __28__
// 02  06  10  14  18  22  26  30
// 01 03 05 07 09 11 13 15 17 19 21 23 25 27 29 31
static void ortree_update (unsigned long *fill, int p)
{
  p &= -4;  fill[p+2]  = fill[p+1] | fill[p+3];
  p &= -8;  fill[p+4]  = fill[p+2] | fill[p+6];
  p &= -16; fill[p+8]  = fill[p+4] | fill[p+12];
  p &= -32; fill[p+16] = fill[p+8] | fill[p+24];
  // the below can be skipped because fill[1] and fill[127]
  // are never taken as or-operand in applications of ortree_or
  /*
  p &= 64;  fill[p+32] = fill[p+16] | fill[p+48];
  p = 0;    fill[p+64] = fill[p+32] | fill[p+96];
  */
}
\end{lstlisting}
Below is the code of a practical algorithm where 64 bit integers are used as large registers, for the actual computation of the bitwise disjunctions.
\begin{lstlisting}[language=C++,basicstyle=\ttfamily\footnotesize, backgroundcolor=\color{black!10},frame=shadowbox, rulecolor=\color{black!50},rulesepcolor=\color{black!25},xrightmargin=2pt]
// returns fill[2*p1+1] | fill[2*p1+3] | ... | fill[2*p2-1]
// assuming that fill[2], fill[4], ..., fill[126] is
// interlacing ortree of fill[1], fill[3], ..., fill[127]
// ______________________________32
// ______________16______________
// ______08______  ______24______
// __04__  __12__  __20__  __28__
// 02  06  10  14  18  22  26  30
// 01 03 05 07 09 11 13 15 17 19 21 23 25 27 29 31
static unsigned long ortree_or_x2 (unsigned long *fill, int p1, int p2)
{
  // negation of leading bit except value -1 for 0
  static int negationofleadingbit[128] = {
    -1, -1, -2, -2, -4, -4, -4, -4, -8, -8, -8, -8, -8, -8, -8, -8,
    -16,-16,-16,-16,-16,-16,-16,-16,-16,-16,-16,-16,-16,-16,-16,-16,
    -32,-32,-32,-32,-32,-32,-32,-32,-32,-32,-32,-32,-32,-32,-32,-32,
    -32,-32,-32,-32,-32,-32,-32,-32,-32,-32,-32,-32,-32,-32,-32,-32,
    -64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,
    -64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,
    -64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,
    -64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64,-64
  };
  // if p1 == p2, then p1 == p == p2
  // if p1 < p2, then p1 + 1 <= p <= p2, and if we see tree as family
  // tree, then p is the last common ancestor of p1 + 1 and p2, and
  // 2 * p is the last common ancestor of 2 * p1 + 1 and 2 * p2 + 1
  int p = p2 & negationofleadingbit[ p1 ^ p2 ];
  unsigned long f = 0;
  unsigned long *fill_2p = fill + 2 * p;
  // f |= fill[2*p1+1] | fill[2*p1+3] | ... | fill[2*p-1]
  p1 -= p;
  // f |= fill_2p[2*p1+1] | fill_2p[2*p1+3] | ... | fill_2p[-1]
  while (p1) {
    // s = trailing bit of p1
    int s = p1 & -p1;
    f |= fill_2p[ 2 * p1 + s ];
    p1 += s;
  }
  // f |= fill[2*p+1] | fill[2*p+3] | ... | fill[2*p2-1]
  p2 -= p;
  // f |= fill_2p[1] | fill_2p[3] | ... | fill_2p[2*p2-1]
  while (p2) {
    // s = trailing bit of p2
    int s = p2 & -p2;
    p2 -= s;
    f |= fill_2p[ 2 * p2 + s ];
  }
  return f;
}
\end{lstlisting}
In the computation of $k$, the usage of bit scan reverse is replaced by a lookup table. Our theoretical model allows such a table as well, because it consists of $\Omicron(n)$ small registers. The initialization of the table can be done in $\Omicron(n)$ steps, as sketched below. There are a few low level optimizations in the code. The most important of them is that the array pointer is translated to make computations easier. The loops are controlled by zero testing on variables, so no extra comparisons are required. One might think that the array index computations in the loops take $3$ instructions on Intel/AMD x64 machines, namely either a move, a shift (or multiplication) and an addition, or a move and $2$ additions. But only one instruction is needed, namely a `load effective address' instruction.
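A possible $\Omicron(n)$ initialization of the lookup table (a sketch; the function name is ours, and the program itself uses the precomputed static table shown above):
\begin{lstlisting}[language=C++,basicstyle=\ttfamily\footnotesize, backgroundcolor=\color{black!10},frame=shadowbox, rulecolor=\color{black!50},rulesepcolor=\color{black!25},xrightmargin=2pt]
// sketch of an O(n) initialization of the lookup table; the solver
// itself uses the precomputed static table above
static void init_negationofleadingbit (int *negationofleadingbit)
{
  negationofleadingbit[0] = -1;      // special value for 0
  for (int b = 1; b < 128; b *= 2)   // b runs over the powers of 2
    for (int i = b; i < 2 * b && i < 128; i++)
      negationofleadingbit[i] = -b;  // the leading bit of i is b
}
\end{lstlisting}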
\section{The Mahjong grid}
Mahjong Solitaire has a more complex tile positioning system. First, tiles have three coordinates, namely row, column, and level. Second, the row and column coordinates are a multiple of $\frac12$ instead of $1$. The latter can also be seen as a property of the tiles themselves, namely that their dimensions are $2 \times 2 \times 1$.
\subsection{Shisen-Sho in three dimensions}
We extend the Shisen-Sho game to three dimensions as follows. If there are other tiles above a given tile, then that tile cannot be played. So the pillar above the tile to be played must be free. If we have two tiles on the same level with no tiles above them, then they can be paired if they can be paired within their level. If we have two tiles on different levels with no tiles above them, then they can be paired if they connect with at most $2$ horizontal and vertical lines in the highest level. The pillar line above the lowest tile can be seen as the third line. If we would allow $2$ of the $3$ lines to be pillar lines, then tiles on the same level may be pairable in the threedimensional grid without being pairable within their level. The result of that would be that the threedimensional game is not an extension of the twodimensional game, which is not what we want. So we allow only $1$ of the $3$ lines to be a pillar line. If tile population is descending in every pillar, then tiles on different levels can be connected by three grid lines, of which one is a pillar line, if and only if this can be done in the above way, i.e.\@ by starting with a pillar line from the lowest tile. The xmahjongg layouts indeed have the property that tile population is descending in every pillar during game play.
\subsection{Pair matching tests for the Mahjong grid}
Besides tile population vectors for the rows and columns, tile population vectors for the pillars are used. We use 16 bit integers for these vectors, so that 16 levels can be distinguished. The tile population vectors for the pillars are in agreement with the real situation, so every tile contributes to $4$ tile population vectors for the pillars. But the tile population vectors for the rows and columns are not in agreement with the real situation any more. If a tile is unplayed, then $2$ consecutive bits are set in only $1$ row instead of $2$, and only $1$ column instead of $2$. So the real tile population vector for row $i$ is the bitwise disjunction of the used tile population vectors for rows $i$ and $i + 1$. The same holds for the columns. The horizontal range of a tile is the bitwise disjunction of the horizontal ranges of the tile in its $2$ rows, taken as real tile population vectors. The actual computation of it in our code is as follows.
\begin{lstlisting}[language=C++,basicstyle=\ttfamily\footnotesize, backgroundcolor=\color{black!10},frame=shadowbox, rulecolor=\color{black!50},rulesepcolor=\color{black!25},xrightmargin=2pt]
// row above tile
unsigned long f1 = rowfill[lev][r-2];
// row of tile without tile itself
unsigned long f2 = rowfill[lev][r] & ~(3lu << col);
// row below tile
unsigned long f3 = rowfill[lev][r+2];
f1 |= f2;
f3 |= f2;
if (f1 == f2 || f2 == f3) {
  rowfillrange = fillrange(f2,col);
} else {
  rowfillrange = fillrange(f1,col) | fillrange(f3,col);
}
\end{lstlisting}
Since the tile appears in only one tile population vector for the rows, it only needs to be removed from one such vector. The result of this is {\ttfamily f2}. {\ttfamily f1} and {\ttfamily f3} are tile population vectors for real rows, again with the tile at hand removed. The indexes {\ttfamily r-2} and {\ttfamily r+2} (instead of {\ttfamily r-1} and {\ttfamily r+1}) are because of the interlacing internal nodes of the bitwise disjunction tree. There is some testing to ensure that in some cases where this is possible, only one horizontal range computation is performed. One such case is where the Mahjong grid is used to emulate the Shisen-Sho grid by way of using only even row and column positions. Surprisingly, we can find connections with the first and third line horizontal and the second line vertical in the same way as for the Shisen-Sho grid, i.e. we test if the bitwise conjunction of the horizontal ranges of the tiles (as sequences of set bits) contains a free bit of the bitwise disjunction of the tile population vectors of the rows in between the tiles. There are $2$ differences with respect to the Shisen-Sho grid here: the tile population vectors of the rows are not in agreement with the real situation, and the computation of the horizontal ranges is different. Other types of connections between tiles on the same level are found in a similar way. For connections between tiles on a different level, the horizontal and vertical range of the lowest tile is replaced by just the tile itself, after which the connectivity of the tiles is tested in the level of the highest tile, just as for tiles on the same level. There are $2$ versions of the program: one with the Shisen-Sho grid and one with the Mahjong grid. But in contrast to the above explanation, the version with the Shisen-Sho grid adopts three dimensions already. The version for the Mahjong grid is slower, but not very much, which is in agreement with the $\Omicron(1)$ extra complexity of the pair matching computations.
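To make the pillar test of the previous subsections concrete, here is a minimal sketch with hypothetical identifiers ({\ttfamily pillarfill} is assumed to hold the 16 bit pillar population vectors, indexed by row and column, and {\ttfamily COLS} is the number of pillar columns): a tile at level {\ttfamily lev} whose upper left pillar is at {\ttfamily (r,c)} has no tiles above it if and only if all $4$ pillar vectors of its $2 \times 2$ footprint are clear above bit {\ttfamily lev}.
\begin{lstlisting}[language=C++,basicstyle=\ttfamily\footnotesize, backgroundcolor=\color{black!10},frame=shadowbox, rulecolor=\color{black!50},rulesepcolor=\color{black!25},xrightmargin=2pt]
#define COLS 64 // illustrative number of pillar columns

// illustrative sketch with hypothetical identifiers: test that no
// tile lies above the 2 x 2 pillar footprint of a tile at level lev
static int pillars_free_above (unsigned short pillarfill[][COLS],
                               int r, int c, int lev)
{
  // bits above position lev (empty for the highest level 15)
  unsigned short above = (unsigned short) (~0u << (lev + 1));
  return !((pillarfill[r][c] | pillarfill[r][c+1] |
            pillarfill[r+1][c] | pillarfill[r+1][c+1]) & above);
}
\end{lstlisting}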
But for layouts for which tile population is decreasing in every pillar, such as the xmahjongg layouts, this difference has no effect on the game. Since the new Mahjong Solitaire solver is about $3$ times as slow as the original Mahjong Solitaire solver, we used the original Mahjong Solitaire solver for the Mahjong Solitaire sampling results.

For Shisen-Sho, we chose $1.15^{6 \sqrt{k}}$ as the number of attempts to solve a board by playing random matches, with $k$ being the number of groups with $4$ equal tiles. For Mahjong Solitaire, we chose $1.2^{6 \sqrt{k}}$ to be this number. This number was $1.2^k$ originally, which comes down to the same for layouts with $144$ tiles, such as the xmahjongg layouts.

The foo layout and the bar layout are not part of xmahjongg. They were designed by me, and are as follows.

\begin{center}
\begin{tikzpicture}[x=2.5mm,y=2.5mm]
\draw[fill=lightgray] (0,0) rectangle (18,20);
\draw \foreach \x in {2,4,...,16} { (\x,0) -- (\x,20) };
\draw \foreach \y in {2,4,...,18} { (0,\y) -- (18,\y) };
\draw[fill=lightgray] (3,3) rectangle (15,17);
\draw \foreach \x in {5,7,...,13} { (\x,3) -- (\x,17) };
\draw \foreach \y in {5,7,...,15} { (3,\y) -- (15,\y) };
\draw[fill=lightgray] (6,6) rectangle (12,14);
\draw \foreach \x in {8,10} { (\x,6) -- (\x,14) };
\draw \foreach \y in {8,10,12} { (6,\y) -- (12,\y) };
\begin{scope}[shift={(23,1)}]
\draw[fill=lightgray] (0,0) rectangle (20,18);
\draw \foreach \x in {2,4,...,18} { (\x,0) -- (\x,18) };
\draw \foreach \y in {2,4,...,16} { (0,\y) -- (20,\y) };
\draw[fill=lightgray] (3,3) rectangle (17,15);
\draw \foreach \x in {5,7,...,15} { (\x,3) -- (\x,15) };
\draw \foreach \y in {5,7,...,13} { (3,\y) -- (17,\y) };
\draw[fill=lightgray] (6,6) rectangle (14,12);
\draw \foreach \x in {8,10,12} { (\x,6) -- (\x,12) };
\draw \foreach \y in {8,10} { (6,\y) -- (14,\y) };
\end{scope}
\end{tikzpicture}
\end{center}

The foo and the bar layout have $3\cdot 4 + 6\cdot 7 + 9\cdot 10 = 144$ tiles, just like the xmahjongg layouts. The foo and the bar layout are the transpose of each other. The deepwell layout is self-transpose, i.e.\@ symmetric.

We used the Computational Science compute cluster of Radboud University Nijmegen for our layout sampling. The computations took about $5$ weeks. The longest computations ran for about $3$ weeks. All Shisen-Sho computations were done in one run, but many of the (transposed) Mahjong Solitaire computations were aborted by system administration. For that reason, I had to write versions of the program for (transposed) Mahjong Solitaire which resume aborted computations. Computation time could not be restored, since it is only written at the end of the computation. We will present graphs of the samples; the exact sample results can be found in the source along with the program code.

\begin{figure*}[!p]
{\center \large \bf Proportion of winnable Shisen-Sho games for \\ rectangular layouts and xmahjongg layouts \\ }
\bigskip
\makebox[\textwidth][c]{\includegraphics{shgraph.pdf}}
\end{figure*}
\begin{figure*}[!p]
{\center \large \bf Proportion of winnable Mahjong Solitaire games for \\ rectangular layouts and xmahjongg layouts \\ }
\bigskip
\makebox[\textwidth][c]{\includegraphics{mjgraph.pdf}}
\end{figure*}

\subsection{Rectangular layouts}

We sampled rectangular layouts up to $32 \times 32$ inclusive. For layouts with an odd number of tiles, we removed one corner tile. For the $1 \times 1$ layout, this resulted in a void layout, which we did not sample.
For other layouts with $2$ or $3$ tiles modulo $4$, we used one tile group with only $2$ equal tiles. For Shisen-Sho, the sample sizes are given on the left. For Mahjong Solitaire, these sample sizes were too time-consuming, so we adapted them as indicated on the right. \begin{center} \begin{tikzpicture}[x=0.5in,y=0.5in] \draw[fill=black!40] (0.125,0.125) -- (0,0.125) -- (0,1) -- (1,1) -- (1,0) -- (0.125,0) -- cycle; \draw[fill=black!35] (1,1) -- (0,1) -- (0,2) -- (2,2) -- (2,0) -- (1,0) -- cycle; \draw[fill=black!30] (2,2) -- (0,2) -- (0,3) -- (3,3) -- (3,0) -- (2,0) -- cycle; \draw[fill=black!25] (3,3) -- (0,3) -- (0,4) -- (4,4) -- (4,0) -- (3,0) -- cycle; \draw[anchor=west] (0.1875,0) node[rotate=-90] {$2$}; \draw[anchor=west] (0.9375,0) node[rotate=-90] {$8$}; \draw[anchor=west] (1.9375,0) node[rotate=-90] {$16$}; \draw[anchor=west] (2.9375,0) node[rotate=-90] {$24$}; \draw[anchor=west] (3.9375,0) node[rotate=-90] {$32$}; \draw[overlay,anchor=east] (0,0.1875) node {$2$}; \draw[overlay,anchor=east] (0,0.9375) node {$8$}; \draw[overlay,anchor=east] (0,1.9375) node {$16$}; \draw[overlay,anchor=east] (0,2.9375) node {$24$}; \draw[overlay,anchor=east] (0,3.9375) node {$32$}; \draw (0.5,0.5) node[rotate=-45] {$10^8$}; \draw (0.5,1.5) node[rotate=-45] {$10^7$}; \draw (1.5,0.5) node[rotate=-45] {$10^7$}; \draw (0.5,2.5) node[rotate=-45] {$10^6$}; \draw (2.5,0.5) node[rotate=-45] {$10^6$}; \draw (0.5,3.5) node[rotate=-45] {$10^5$}; \draw (3.5,0.5) node[rotate=-45] {$10^5$}; \begin{scope}[shift={(4.7,0)}] \draw[fill=black!40] (0.125,0.125) -- (0,0.125) -- (0,1) -- (1,1) -- (1,0) -- (0.125,0) -- cycle; \draw[fill=black!35] (1,1) -- (0,1) -- (0,2) -- (2,2) -- (2,0) -- (1,0) -- cycle; \draw[fill=black!30] (3,3) -- (3,0) -- (2,0) -- (2,1.5) \foreach \k in {1,...,4} { -- ++(-0.125,0) -- ++ (0,0.125) } -- (0,2) -- (0,3) -- cycle; \draw[fill=black!25] (4,4) -- (4,0) -- (3,0) -- (3,1.5) -- (2,1.5) -- (2,2) -- (1.5,2) -- (1.5,3) -- (0,3) -- (0,4) -- cycle; \draw[fill=black!20] (4,4) -- (4,1.5) -- (3,1.5) -- (3,1.75) \foreach \k in {1,...,10} { -- ++(-0.125,0) -- ++ (0,0.125) } -- (1.5,3) -- (1.5,4) -- cycle; \draw[fill=black!15] (4,4) -- (4,1.75) -- (3,1.75) -- (3,3) -- (1.75,3) --(1.75,4) -- cycle; \draw[anchor=west] (0.1875,0) node[rotate=-90] {$2$}; \draw[anchor=west] (0.9375,0) node[rotate=-90] {$8$}; \draw[anchor=west] (1.9375,0) node[rotate=-90] {$16$}; \draw[anchor=west] (2.9375,0) node[rotate=-90] {$24$}; \draw[anchor=west] (3.9375,0) node[rotate=-90] {$32$}; \draw[anchor=east] (1.4375,4) node[rotate=-90] {$12$}; \draw[anchor=east] (1.6875,4) node[rotate=-90] {$14$}; \draw[overlay,anchor=east] (0,0.1875) node {$2$}; \draw[overlay,anchor=east] (0,0.9375) node {$8$}; \draw[overlay,anchor=east] (0,1.9375) node {$16$}; \draw[overlay,anchor=east] (0,2.9375) node {$24$}; \draw[overlay,anchor=east] (0,3.9375) node {$32$}; \draw[overlay,anchor=west] (4,1.4375) node {$12$}; \draw[overlay,anchor=west] (4,1.6875) node {$14$}; \draw (0.5,0.5) node[rotate=-45] {$10^8$}; \draw (0.5,1.5) node[rotate=-45] {$10^7$}; \draw (1.5,0.5) node[rotate=-45] {$10^7$}; \draw (0.5,2.5) node[rotate=-45] {$10^6$}; \draw (2.5,0.5) node[rotate=-45] {$10^6$}; \draw (0.5,3.5) node[rotate=-45] {$10^5$}; \draw (3.5,0.5) node[rotate=-45] {$10^5$}; \draw (2.15,2.15) node[rotate=-45] {$10^5$}; \draw (2.6,2.6) node[rotate=-45] {$10^4$}; \draw (3.5,3.5) node[rotate=-45] {$10^3$}; \end{scope} \end{tikzpicture} \end{center} For Shisen-Sho, the proportion of winnable boards is the same as for the transposed layout. 
This yielded double sample size for the non-square rectangular layouts. To get double sample size for the square layouts as well, we simply chose their samples twice as large. So the effective sample sizes are $200\,000$, $2\,000\,000$, $20\,000\,000$ and $200\,000\,000$. For Mahjong Solitaire, we doubled the sample size by combining the sample results with those of the transposed layouts for transposed Mahjong Solitaire. So the effective sample sizes are $2\,000$, $20\,000$, $200\,000$, $2\,000\,000$, $20\,000\,000$ and $200\,000\,000$. A percentage in the graph indicates that there were both winnable and impossible boards.

\subsection{xmahjongg layouts}

Having a compute cluster this time, we took $200$ times as many samples of the xmahjongg layouts as in \cite{dB}. So we sampled the default layout $2\,000\,000\,000$ times and the other layouts $20\,000\,000$ times each. We sampled the default layout in $20$ threads of $100\,000\,000$ boards each. For Mahjong Solitaire, each chunk of $100\,000\,000$ boards took about $3$ weeks. For Shisen-Sho and transposed Mahjong Solitaire, this was less than $6$ days and $2$ days respectively.

The graphs for transposed Mahjong Solitaire are to the right of those for Mahjong Solitaire. For the symmetric deepwell layout, we combined the results for Mahjong Solitaire and transposed Mahjong Solitaire, resulting in a sample size of $40\,000\,000$ for Mahjong Solitaire. The foobar layout is just the foo layout, extended with the sample results of the bar layout with transposed matching rules, resulting in a sample size of $40\,000\,000$ for all three game types.

We obtained the following results for the default layout:
\begin{center}
\begin{tabular}{rl}
Mahjong Solitaire: & 2.959 percent impossible \\
Transposed Mahjong Solitaire: & 1.756 percent impossible \\
Shisen-Sho: & 1.906 percent impossible
\end{tabular}
\end{center}
The first $100\,000$ boards of the papillon layout are not winnable, so no winnable boards were found for this layout in \cite{dB}. But $16$ of the $20\,000\,000$ boards are winnable. The hourglass layout has no winnable boards in the sample of $20\,000\,000$ boards.
\section*{Abstract}
{\bf
Fascinating structures have arisen from the study of the fractional quantum Hall effect (FQHE) at the even denominator fraction of $5/2$. We consider the FQHE at another even denominator fraction, namely $\nu=2+3/8$, where a well-developed and quantized Hall plateau has been observed in experiments. We examine the non-Abelian state described by the ``$\bar{3}\bar{2}^{2}1^{4}$'' parton wave function and numerically demonstrate it to be a feasible candidate for the ground state at $\nu=2+3/8$. We make predictions for experimentally measurable properties of the $\bar{3}\bar{2}^{2}1^{4}$ state that can reveal its underlying topological structure.
}

\vspace{10pt}
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\noindent\rule{\textwidth}{1pt}
\vspace{10pt}

Strongly interacting two-dimensional electron systems at low temperatures and subjected to high perpendicular magnetic fields exhibit a broad range of non-perturbative phenomena. A case in point is the fractional quantum Hall effect (FQHE) \cite{Tsui82, Laughlin83}, which arises from the Coulomb repulsion between electrons and leads to the formation of incompressible states at Landau level (LL) fillings $\nu$. The $\nu=5/2$ FQHE, the first even denominator state to be observed~\cite{Willett87}, has produced a wide variety of remarkable concepts. Its most plausible explanation is in terms of the Moore-Read Pfaffian wave function~\cite{Moore91} or its particle-hole conjugate, the anti-Pfaffian~\cite{Levin07, Lee07}. The Pfaffian and anti-Pfaffian states represent topological $p$-wave paired states of fully spin-polarized composite fermions (CFs)~\cite{Jain89}, which are bound states of electrons and vortices~\cite{Read00}. Intriguingly, these states host non-Abelian excitations which could potentially be used to carry out fault-tolerant topological quantum computation~\cite{Nayak08}.

This article is concerned with the physical origin of another even denominator fraction, namely $\nu=2+3/8$, for which convincing experimental evidence exists \cite{Xia04, Pan08, Choi08, Kumar10, Zhang12}. In particular, Kumar {\em et al.}~\cite{Kumar10} demonstrated activated magnetotransport at this fraction, fully confirming the formation of an incompressible state at this filling. Theoretically, the $2+3/8$ state was first studied in Ref.~\cite{Toke08}, where it was suggested that the inter-CF interaction could induce FQHE; however, the resulting paired state was shown to be distinct from the Pfaffian state. Subsequently, the anti-Pfaffian pairing of CFs was also ruled out at $2+3/8$~\cite{Hutasoit16}. Hutasoit \emph{et al.}~\cite{Hutasoit16} considered a Bonderson-Slingerland (BS) state and showed that the ground state at $2+3/8$ is well-described by it. Like the Pfaffian and anti-Pfaffian states, the BS state also supports excitations that are expected to possess non-Abelian braid statistics~\cite{Bonderson08}.

In this work, we propose that the $2+3/8$ state could be described by a non-Abelian parton wave function that is topologically distinct from the BS state. We show that the parton state is in close competition with the BS state. Furthermore, to tell the two states apart we make predictions for many experimentally measurable properties. Definitive confirmation of the parton state at $2+3/8$ would lend further credence to the compelling assertion that \emph{all} the experimentally observed FQHE states in the second LL (SLL) of GaAs conform to the parton paradigm~\cite{Balram19}.
The parton theory~\cite{Jain89b} (for a review of parton states see Sec.~\ref{subsec: parton_states}) has seen a resurgence in recent years owing to the following observations:
\begin{itemize}
\item \emph{All} the FQHE states observed in the second Landau level of GaAs plausibly lend themselves to a description in terms of partons. The $\bar{n}\bar{2}1^{3}$ states (this notation is elucidated in detail in Sec.~\ref{subsec: parton_states}) for $n=1,2,3$ capture the FQHE seen at $8/3,~5/2$ and $2+6/13$, respectively~\cite{Balram18,Balram18a,Balram20} (see also Appendix~\ref{sec: 6_13_parton}, where we show new results to further demonstrate the viability of the $\bar{3}\bar{2}1^{3}$ state for $2+6/13$). Furthermore, the $\bar{n}\bar{2}^{2}1^{4}$ states for $n=1,2$ capture the experimentally observed plateaus at $5/2$ and $12/5$~\cite{Balram19}. The $\bar{n}\bar{2}1^{3}$ and $\bar{n}\bar{2}^{2}1^{4}$ sequences correspond to states at $\nu=2n/(5n-2)$ and $n/(3n-1)$, respectively, and FQHE in the SLL has been observed up to $n=3$ at \emph{all} these fillings. In GaAs quantum wells, FQHE has \emph{only} been observed at the aforementioned fillings in the SLL aside from fractions that correspond to the $n/(4n\pm 1)$ sequence. The SLL states in the $n/(4n\pm 1)$ sequence are believed to be analogous to their LLL counterparts~\cite{Kusmierz18} (see Appendix~\ref{sec: 4CF_SLL}, where we show results supporting this belief for $1/5$, $2/7$ and $2/9$). The $\bar{n}\bar{2}1^{3}$ and $\bar{n}\bar{2}^{2}1^{4}$ sequences are related to each other by the symmetric state $\bar{2}1$, i.e., they are the $p=1,2$ members of the $\bar{n}1^{2}(\bar{2}1)^{p}$ family of states, respectively. This is analogous to the primary and secondary Jain states, described respectively by the $n1^{2}$ and $n1^{4}$ states ($p=1,2$ members of the $n1^{2p}$ family of Jain states), which are related by the symmetric factor $1^{2}$. Unlike the factor $1^{2}$, which lends itself to a picture in terms of attachment of vortices/fluxes, we do not know of a simple interpretation for the factor $\bar{2}1$.
\item The $n\bar{n}1^{3}$ states, which possess a novel $\mathbb{Z}_{n}$ topological order that violates the bulk-edge correspondence~\cite{Moore91}, could potentially be relevant to the $7/3$ FQHE~\cite{Balram19a, Faugno20b}.
\item The $\bar{2}^{k}1^{k+1}$ states lie in the same universality class as the particle-hole conjugate of the $k$-cluster Read-Rezayi~\cite{Read99} (anti-RR$k$ or aRR$k$) states~\cite{Balram19}. A nice feature of these parton states is that their wave functions can be evaluated for very large system sizes of the order of $N=100$. In contrast, the wave functions of the Read-Rezayi states can be evaluated only for system sizes of the order of $N=30$.
\item The $221$ and $221^{3}$ states may apply to certain even-denominator FQHE states observed in graphene~\cite{Wu16, Kim18} and wide quantum wells~\cite{Faugno19} respectively.
\item The $\bar{3}^{2}1^{3}$ state at $3/7$ was considered in Ref.~\cite{Faugno20a} and was shown to be feasible in the SLL and in its vicinity (see also Appendix~\ref{sec: 3_7_parton}, where we show new results that further support the viability of the $\bar{3}^{2}1^{3}$ state in the SLL). Although FQHE has not been established at $\nu=2+3/7$, some signatures of it have been reported~\cite{Choi08}.
\end{itemize}
In light of these exciting recent developments in the parton theory, we revisit the FQHE observed at $2+3/8$.
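As a quick consistency check on the fillings quoted in the first item above, the standard parton counting rule $\nu^{-1}=\sum_\beta n_\beta^{-1}$ (reviewed in Sec.~\ref{subsec: parton_states}) gives
\begin{equation*}
\bar{n}\bar{2}1^{3}:~ \nu^{-1}=-\frac{1}{n}-\frac{1}{2}+3=\frac{5n-2}{2n}, \qquad \bar{n}\bar{2}^{2}1^{4}:~ \nu^{-1}=-\frac{1}{n}-\frac{1}{2}-\frac{1}{2}+4=\frac{3n-1}{n},
\end{equation*}
reproducing $\nu=2n/(5n-2)$ and $\nu=n/(3n-1)$, respectively.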
We consider the $n=3$ member of the $\bar{n}\bar{2}^{2}1^{4}$ sequence and show that it is a viable candidate to capture the ground state at $2+3/8$. Our results exhibit that the $2+3/8$ FQHE dovetails nicely with the parton description of SLL states.

Experimentally, FQHE has been well-established at $\nu=2+3/8$~\cite{Xia04, Pan08, Choi08, Kumar10, Zhang12} only in GaAs quantum wells. The LL corresponding to the $n=1$ orbital in the zeroth LL of bilayer graphene (BLG) is expected to be similar to the SLL of GaAs. Indeed, almost all FQHE states seen in the SLL of GaAs or their particle-hole conjugates have also been observed in BLG~\cite{Zibrov17}. To the best of our knowledge, the only exception happens to be $3/8$, where FQHE has been observed in the SLL of GaAs but surprisingly no FQHE is seen either at $3/8$ or its particle-hole conjugate filling $5/8$ in BLG~\cite{Zibrov17}. The absence of FQHE at these fractions in BLG likely results from the fact that the single-particle wave function for the $n=1$ orbital in the zeroth LL of BLG has an admixture of the $n=0$ LL of ordinary semiconductors. Thus the effective interaction between the electrons is modified from the pristine $n=1$ LL one. Presumably, the modified interaction leads to the formation of a bubble crystal ground state at $3/8$ and $5/8$ in BLG. It is possible that with a small change in the interaction between the electrons (due to screening by gates and/or LL mixing, etc.) the FQHE liquid at $3/8$ could emerge in BLG.

This article is organized as follows: In Sec.~\ref{sec: background} we provide some background material on the spherical geometry, a primer on the parton states, and introduce the candidate parton wave function for the $2+3/8$ ground state. In Sec.~\ref{sec: results} we present results obtained from variational Monte Carlo (Sec.~\ref{subsec: VMC}) and exact diagonalization (Sec.~\ref{subsec: ED}) calculations of the $3/8$ state in the SLL. In Sec.~\ref{sec: experimental_ramifications} we present the experimental ramifications of our work and conclude the paper in Sec.~\ref{sec: conclusions} with a summary of our results.

\section{Background}
\label{sec: background}

\subsection{Spherical geometry}

Comparisons with exact states available for finite systems have played an important role in confirming or eliminating various candidate FQHE states. For this purpose, we will employ Haldane's spherical geometry~\cite{Haldane83}, where $N$ electrons reside on the surface of a sphere and a magnetic monopole of strength $2Q\phi_{0}$ (where $\phi_0=hc/e$ is a flux quantum) produces a radial magnetic field. The radius of the sphere is $R=\sqrt{Q}\ell$, where $\ell=\sqrt{\hbar c/(eB)}$ is the magnetic length and $B$ is the perpendicular magnetic field. In the LL indexed by $n$, the total number of single-particle orbitals is $2l+1=2(Q+n)+1$. Quantum Hall ground states on the sphere are uniform, i.e., have total orbital angular momentum $L=0$. An incompressible state at a filling factor $\nu$ occurs at $2l=\nu^{-1}N-\mathcal{S}$, where $\mathcal{S}$ is a topological quantum number called the shift~\cite{Wen92}. Often candidate states at the same filling factor occur at different shifts. In this work, we shall evaluate the ground-state energies of different candidate states to determine which among them is energetically favored. The total energy includes the contribution of the positively charged background, which we assume interacts via the $1/r$ Coulomb potential.
Assuming a uniform distribution of the background charge on the sphere, the electron-background and background-background interactions collectively contribute $-N^{2}/(2\sqrt{Q})~e^{2}/(\epsilon\ell)$ to the energy. We multiply the per-particle energies by a factor of $\sqrt{2Q\nu/N}$ before extrapolating them to the thermodynamic limit~\cite{Morf86}. This factor corrects for the deviation of the electron density of a finite system from its $N\rightarrow \infty$ value, thereby providing a more accurate extrapolation. All the energies are quoted in units of $e^2/(\epsilon\ell)$, where $\epsilon$ is the dielectric constant of the host.

An important feature of an incompressible FQHE state is the existence of a finite gap to neutral and charged excitations. The neutral gap is defined as the difference between the two lowest-energy states at a given value of $N$ and $2l$. The charge (or transport) gap is defined as the energy required to create a far-separated pair of fundamental (smallest magnitude charge) quasiparticle and quasihole. From exact diagonalization, the charge gap for a system of $N$ electrons at a given value of $2l$ can be obtained as:
\begin{eqnarray}
\label{eq: charge_gap}
\Delta^{\rm charge}&=&\frac{\mathcal{E}(2l-1)+\mathcal{E}(2l+1)-2\mathcal{E}(2l)}{n_{q}}, \\
\mathcal{E}(2l)&=&E(2l)-N^{2} \frac{\mathcal{C}(2l)}{2}. \nonumber
\end{eqnarray}
Here $E(2l)$ is the exact ground state energy of $N$ electrons at $2l$, $\mathcal{C}(2l)$ is the average charging energy at $2l$ which accounts for the background contribution~\cite{Balram20}, and $n_{q}$ is the number of fundamental quasiholes (quasiparticles) produced upon the insertion (removal) of a single flux quantum in the ground state. As mentioned above, for the $1/r$ Coulomb interaction, $\mathcal{C}(2l)=1/\sqrt{l-n}~e^{2}/(\epsilon\ell)=1/\sqrt{Q}~e^{2}/(\epsilon\ell)$.

Next, we state the approximations involved in our calculations, which are routinely deployed in numerical studies of FQHE systems. In experiments, the finite magnetic field leads to LL mixing which breaks the degeneracy between a state and its particle-hole conjugate. However, throughout this work, we will make the simplifying assumption of neglecting LL mixing and thereby treat states related by particle-hole conjugation on the same footing. In the absence of LL mixing, we can focus our attention on a single LL. We shall discuss below all physics within the LLL subspace, even though we are interested in the second LL physics. This is possible because the problem of electrons in the SLL interacting via the Coulomb interaction is mathematically equivalent to the problem of electrons in the LLL interacting with an effective interaction that has the same Haldane pseudopotentials~\cite{Haldane83} as the Coulomb interaction in the second LL. In this work, we use the effective interaction given in Ref.~\cite{Shi08} to simulate the physics of the SLL in the LLL. Aside from the spherical pseudopotentials, we shall also show results obtained from the disk pseudopotentials, which are believed to provide a more reliable approach to the thermodynamic limit. Unless otherwise stated we shall assume the electrons to be fully spin-polarized. We will also neglect the effects of screening and disorder, which alter the form of the interaction and produce corrections to various observable quantities. Studies that take into account these effects will be needed for a more detailed and quantitative comparison with experiments.
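As a concrete illustration of the flux-shift relation, consider the system that will be our main focus below: the candidate parton state at $\nu=3/8$ has shift $\mathcal{S}=-3$ (as derived in the next subsections), so a system of $N=12$ electrons occurs at
\begin{equation*}
2l=\nu^{-1}N-\mathcal{S}=\frac{8}{3}\times 12-(-3)=35,
\end{equation*}
which is precisely the flux at which the exact diagonalization studies of Sec.~\ref{subsec: ED} are carried out.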
\subsection{Parton states}
\label{subsec: parton_states}

The parton theory, introduced by Jain~\cite{Jain89b}, provides a scheme to construct candidate FQHE states. In the parton approach, one envisages dividing each electron into fictitious sub-particles called ``partons,'' placing each species of partons in an integer quantum Hall effect (IQHE) state with filling $n_{\beta}$ (here $\beta$ labels the different parton species), and finally sticking the partons back together to recover the physical electrons. The resulting parton state is denoted by ``$n_1 n_2\cdots$'' and its wave function is given by
\begin{equation}
\Psi_{\nu}^{n_1 n_2\cdots} = \mathcal{P}_{\rm LLL}\prod_{\beta} \Phi_{n_{\beta}}(\{z_{k}\}).
\label{eq:parton_general}
\end{equation}
Here $\Phi_n$ is the Slater determinant wave function of the IQHE state with $n$ filled LLs of electrons, $z_{k}=x_{k}-iy_{k}$, $k=1,2,\cdots, N$ is the two-dimensional coordinate of the $k^{\rm th}$ electron parametrized as a complex number, and $\mathcal{P}_{\rm LLL}$ projects the state into the LLL. We will denote a negative integer as $\bar{n}=-n$ with $\Phi_{\bar{n}}\equiv \Phi_{-n}=[\Phi_n]^*$. Note that each of the constituent IQHE states is itself made up of \emph{all} of the electrons. The charge of the parton species $\beta$ is given by $e_{\beta}=-\nu e/n_\beta$, which is consistent with the constraint that the charges of the partons add to that of the electron, i.e., $\sum_\beta e_\beta=-e$, where $-e$ is the charge of the electron. The wave function given in Eq.~(\ref{eq:parton_general}) occurs at the filling factor $\nu=\left[\sum_\beta n_\beta^{-1}\right]^{-1}$ and has a shift~\cite{Wen92} $\mathcal{S}=\sum_\beta n_\beta$ in the spherical geometry. Thus the shift of any parton state is always an integer, and therefore FQHE states with a non-integral shift~\cite{Levin09,Balram15,Balram16c} cannot be obtained directly from a parton construction (without allowing for operations like particle-hole conjugation). One can generalize the parton construction to allow the partons themselves to form FQHE states (which can have a fractional shift~\cite{Balram15,Balram16c}) that can then result in FQHE states of electrons with a non-integral shift.

Many well-known FQHE states such as the Laughlin and Jain (composite fermion) states can be obtained from the parton construction. The $\nu=1/(2p+1)$ Laughlin state is a $(2p+1)$-parton state where each of the partons forms a $\nu=1$ IQHE state. The Laughlin state is denoted as ``$11\cdots 1$'' [$(2p+1)~1$s] and its wave function is given by $\Psi^{\rm Laughlin}_{1/(2p+1)}=\Phi^{2p+1}_{1}$. The $\nu=n/(2pn\pm 1)$ Jain state is a $(2p+1)$-parton state where $2p$ partons form a $\nu=1$ IQHE state and a single parton forms a $\nu=\pm n$ IQHE state. The Jain state is denoted as ``$\pm n11\cdots 1$'' [$2p~1$s] and its wave function is given by $\Psi^{\rm Jain}_{n/(2pn\pm 1)} = \mathcal{P}_{\rm LLL}\Phi_{\pm n}\Phi^{2p}_{1}$. The parton theory allows us to construct states which go beyond the CF description. As we shall see below, the parton state of our interest is of the non-CF kind.

\subsection{Trial states at 3/8}

In this article, we shall concern ourselves with the $n=3$ member of the $\bar{n}\bar{2}^{2}1^{4}$ sequence of states at $\nu=n/(3n-1)$, which was posited in Ref.~\cite{Balram19}.
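For the $n=3$ member, the parton rules quoted above give the shift and parton charges directly:
\begin{equation*}
\mathcal{S}=-3-2-2+4\times 1=-3, \qquad e_{\bar{3}}=\frac{e}{8},~ e_{\bar{2}}=\frac{3e}{16},~ e_{1}=-\frac{3e}{8},
\end{equation*}
with the parton charges adding up to $-e$, as they must.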
The $n=1$ and $n=2$ members of this sequence lie in the same phase as the anti-Pfaffian~\cite{Levin07, Lee07} at $\nu=1/2$~\cite{Balram18} and the aRR$3$ state~\cite{Read99} at $\nu=2/5$~\cite{Balram19}, which likely describe the experimentally observed FQHE at $\nu=5/2$ and $12/5$, respectively. The wave function of the $\bar{3}\bar{2}^{2}1^{4}$ state, which occurs at $\nu=3/8$, is given by:
\begin{equation}
\Psi_{3/8}^{\bar{3}\bar{2}^{2}1^{4}} = \mathcal{P}_{\rm LLL} [\Phi^{*}_{3}] [\Phi^{*}_{2}]^{2}\Phi^{4}_{1} \sim \frac{\Psi^{\rm Jain}_{3/5}[\Psi^{\rm Jain}_{2/3}]^{2}}{\Phi^{2}_{1}} ,
\label{eq: parton_3_8}
\end{equation}
where the $\sim$ sign indicates that the states on either side of the sign differ in the details of how the projection to the LLL is implemented. Although the two wave functions on either side of the $\sim$ sign differ microscopically, we expect that they describe the same topological phase~\cite{Balram16b}. A nice feature of the Jain wave functions is that they can be evaluated for hundreds of electrons in \emph{real space} (in first quantized form) using the Jain-Kamilla projection~\cite{Jain97b,Moller05,Jain07,Davenport12,Balram15a}. Therefore, the form of the $\bar{3}\bar{2}^{2}1^{4}$ wave function stated on the right-most end of Eq.~(\ref{eq: parton_3_8}) allows access to large system sizes and will be used throughout this work. The shift of the above state in the spherical geometry is $\mathcal{S}^{\bar{3}\bar{2}^{2}1^{4}}=-3$. The $n=4$ member of the $\bar{n}\bar{2}^{2}1^{4}$ sequence produces a state at $\nu=4/11$ where FQHE has not yet been observed in the SLL. We note that there is clear evidence for FQHE at $\nu=4/11$ in the LLL~\cite{Samkharadze15b, Pan15}, and a parton state was recently proposed for its description~\cite{Balram21}.

Another candidate state for the $2+3/8$ FQHE is the Bonderson-Slingerland (BS) state~\cite{Bonderson08} that is described by the wave function:
\begin{equation}
\Psi_{3/8}^{\rm BS} = \mathcal{P}_{\rm LLL} {\rm Pf}\left( \frac{1}{z_{i}-z_{j}} \right) [\Phi^{*}_{3}] \Phi^{3}_{1} \sim {\rm Pf}\left( \frac{1}{z_{i}-z_{j}}\right)\Phi_{1}\Psi^{\rm Jain}_{3/5} ,
\label{eq: BS_3_8}
\end{equation}
where ${\rm Pf}$ is the Pfaffian of an anti-symmetric matrix, with the square of the Pfaffian being the determinant. The shift of the BS state, $\mathcal{S}^{\rm BS}=1$, is different from that of the state given in Eq.~(\ref{eq: parton_3_8}), indicating that the two states carry different topological orders~\cite{Wen92}. The wave function of Eq.~(\ref{eq: BS_3_8}) was shown to be a good candidate to describe the $2+3/8$ FQHE~\cite{Hutasoit16}. In particular, for the only system accessible to exact diagonalization, that of $N=12$ electrons, the $3/8$ BS state has a good overlap of about 80\% with the SLL Coulomb ground state~\cite{Hutasoit16}. The $3/8$ BS state is the $n=3$ member of the family of states defined by the BS wave function $\Psi_{n/(3n-1)}^{\rm BS} = \mathcal{P}_{\rm LLL} {\rm Pf}\left( [z_{i}-z_{j}]^{-1}\right) [\Phi^{*}_{n}] \Phi^{3}_{1} \sim {\rm Pf}\left( [z_{i}-z_{j}]^{-1}\right)\Phi_{1}\Psi^{\rm Jain}_{n/(2n-1)}$, which describes states at $\nu=n/(3n-1)$ [same fillings as the $\bar{n}\bar{2}^{2}1^{4}$ sequence mentioned above]. The $n=1$ member is the same as the Moore-Read Pfaffian state at $\nu=1/2$~\cite{Moore91}. The $n=2$ member describes a state at $\nu=2/5$ and has been put forth as a candidate state to describe the $12/5$ FQHE~\cite{Bonderson12}.
The $n=4$ member provides a state at $\nu=4/11$, where FQHE has not yet been observed in the SLL.

\section{Results}
\label{sec: results}

\subsection{Variational Monte Carlo}
\label{subsec: VMC}

\begin{figure}[htpb]
\begin{center}
\includegraphics[width=0.47\textwidth,height=0.23\textwidth]{3_8_fp_n_1_LL_Chuntai.pdf}
\caption{(color online) Thermodynamic extrapolations of the per-particle Coulomb energies for the Bonderson-Slingerland state (blue crosses) and the $\bar{3}\bar{2}^{2}1^{4}$ state (red dots) for $\nu=3/8$ in the second Landau level. The extrapolated energies, obtained from a quadratic fit in $1/N$, are quoted in Coulomb units of $e^2/(\epsilon\ell)$ on the plot. The energies include contributions of the background positive charge and have been density-corrected~\cite{Morf86}. }
\label{fig: extrapolations_energies_3_8}
\end{center}
\end{figure}

We compare the Coulomb energies of the $\bar{3}\bar{2}^{2}1^{4}$ parton and the $3/8$ BS states in the second Landau level in Fig.~\ref{fig: extrapolations_energies_3_8}. We find that the two states are energetically competitive with each other, with the BS state having slightly lower energy in the thermodynamic limit.

We mention here a couple of important caveats concerning these energetic comparisons. Firstly, in our variational calculations, we have made several approximations, such as neglecting the effects of LL mixing, the finite width of the quantum well, screening, and disorder. When candidate states are close in energy, the precise nature of the ground state stabilized in experiments can only be determined by taking into account these effects. This is clearly beyond the scope of the current work. Secondly, due to technical difficulties, the projection of the parton and BS states was carried out as stated on the extreme right-hand sides of Eqs.~(\ref{eq: parton_3_8}) and~(\ref{eq: BS_3_8}), respectively. Although we expect these versions of the wave functions to lie in the right topological phase, they may not be their best microscopic representatives. Therefore, even though the $3/8$ BS state has slightly lower energy than the $\bar{3}\bar{2}^{2}1^{4}$ state for the exact second LL Coulomb point, it does not necessarily imply that the experimentally observed $2+3/8$ state is in the same topological phase as the BS state.

An analogous situation arises for the $12/5$ FQHE, where two different topological orders are in close competition with each other. The $2/5$ BS state~\cite{Bonderson08} and the aRR$3$ state~\cite{Read99} have nearly identical energies in the second Landau level and both states provide fairly good representations of the exact SLL Coulomb ground state~\cite{Read99, Bonderson12}. However, recent studies of the entanglement spectra of the $12/5$ Coulomb ground state indicate that it likely lies in the same topological phase as the aRR$3$ state~\cite{Zhu15, Mong15, Pakrouski16}. Recently, in Ref.~\cite{Balram19} it was shown that the $\bar{2}^{3}1^{4}$ state lies in the same topological phase as the aRR$3$ state, though the parton state does not provide as good a representation of the SLL Coulomb state as the aRR$3$ state does. Therefore, although the $2/5$ BS state is lower in energy than the $\bar{2}^{3}1^{4}$ state in the SLL, the $12/5$ Coulomb state likely lies in the same topological phase as that described by the $\bar{2}^{3}1^{4}$ state. We conclude that our variational calculations are not able to decisively determine the nature of the ground state at $2+3/8$.
This situation in the SLL should be contrasted with that in the LLL, where variational energetic comparisons based on the CF theory can decisively determine the nature of the ground state. The reason is that, unlike the trial states in the SLL, the CF states provide a near-perfect representation of the accessible LLL Coulomb ground states obtained from exact diagonalization~\cite{Dev92, Jain07, Balram13, Yang19a}.

\subsection{Exact diagonalization}
\label{subsec: ED}

Next, we present results obtained from exact diagonalization in the SLL. The shifts of the competitive candidate states can be identified from the existence of robust charge and neutral gaps and the presence of downward cusps in the ground-state energies for a fixed number of particles as the flux through the sphere is varied. In Fig.~\ref{fig: N_12_fix_N_sweep_2l}, we show the ground-state energy as well as the charge and neutral gaps for the smallest system of $N=12$ electrons for $2l=28$ to $39$ in the SLL for both the spherical and disk pseudopotentials. We find a clearly discernible downward cusp in the ground-state energy at $2l=35$, which corresponds to the proposed $3/8$ parton state. Moreover, the state at $2l=35$ harbors robust charge and neutral gaps. We find similar features at the values of $2l$ corresponding to other candidate states in this range, namely $2l=33$ for $7/3$ and $2l=28$ for $2+6/13$. In contrast, at the value corresponding to the $3/8$ BS state, $2l=31$, we do not find a prominent downward cusp in the ground-state energy or a robust charge gap. These results from exact diagonalization thus favor the parton description of $2+3/8$ over the BS state. The next system size for which the $\bar{3}\bar{2}^{2}1^{4}$ and $3/8$ BS states can be constructed on the sphere is $N=18$, which is currently not accessible to exact diagonalization since their Hilbert space dimensions are over 324 billion and 60 billion, respectively.

\begin{figure}[htpb]
\begin{center}
\includegraphics[width=0.47\textwidth,height=0.23\textwidth]{N_12_2l_28_to_39_sphere.pdf} \\
\includegraphics[width=0.47\textwidth,height=0.23\textwidth]{N_12_2l_28_to_39_disk.pdf}
\caption{(color online) A plot of the second-Landau-level Coulomb ground-state energies (green dots) and neutral (blue squares) and charge (red diamonds) [using $n_{q}=1$ in Eq.~(\ref{eq: charge_gap})] gaps for $N=12$ electrons as a function of the shell angular momentum $2l$ obtained using exact diagonalization in the spherical geometry with the spherical [top panel (a)] and disk [bottom panel (b)] pseudopotentials. A pronounced downward cusp in the ground-state energy and robust charge and neutral gaps are seen at $2l=35$, which corresponds to the $\bar{3}\bar{2}^{2}1^{4}$ state at $3/8$. For reference we have also marked other states such as $2l=28$ [$\bar{3}\bar{2}1^{3}$ state at $\nu=6/13$~\cite{Balram18a}], $2l=31$ [$\nu=3/8$ Bonderson-Slingerland (3/8 BS)~\cite{Bonderson08}] and $2l=33$ [$\nu=1/3$ Laughlin (L)~\cite{Laughlin83}]. The dotted lines are a guide to the eye. }
\label{fig: N_12_fix_N_sweep_2l}
\end{center}
\end{figure}

We now focus our attention on the system of $N=12$ electrons at $2l=35$, which has a Hilbert space dimension of about 16 million. For this system, we find that the exact Coulomb ground state in the SLL for both the spherical and disk pseudopotentials is uniform, i.e., has $L=0$.
The exact ground states obtained using the disk and spherical pseudopotentials have an overlap of 0.99, which indicates that they are quite close to each other. Due to technical reasons, it has not been possible for us to obtain the Fock-space representation (second quantized form) of the $\bar{3}\bar{2}^{2}1^{4}$ state, which precludes an accurate calculation of its overlap with the exact ground states or a comparison of their entanglement spectrum~\cite{Li08} even for the $N=12$ system [using a Monte Carlo calculation, we estimate the overlap of the $\bar{3}\bar{2}^{2}1^{4}$ state, projected to the LLL as stated in Eq.~(\ref{eq: parton_3_8}), with the exact Coulomb ground state obtained using the SLL disk pseudopotentials for $N=12$ electrons to be 0.63(4), where the number in parentheses indicates the statistical uncertainty of the Monte Carlo estimate]. However, we have compared the pair-correlation function $g(r)$ of the exact SLL Coulomb ground state with that of the $\bar{3}\bar{2}^{2}1^{4}$ state for $N=12$ electrons (see Fig.~\ref{fig: pair_correlations_3_8}). The $g(r)$ of both states shows oscillations that decay at long distances, which is a characteristic feature of an incompressible state~\cite{Kamilla97, Balram15b, Balram17}. The agreement between the $g(r)$ of the exact ground state and that of the parton state is on par with what is found for trial states at other filling factors in the SLL~\cite{Moller08,Balram20}. We note that the $\bar{3}\bar{2}^{2}1^{4}$ state shows a ``shoulder''-like feature in the $g(r)$ at short to intermediate lengths, which is considered a typical fingerprint of clustering in non-Abelian states~\cite{Read99}. For the $N=12$ system, the exact energy for the effective interaction we use to simulate the physics of the SLL in the LLL is $-0.4041$. In comparison, the $\bar{3}\bar{2}^{2}1^{4}$ state has an energy of $-0.3993(2)$ for the same interaction, where the number in the parentheses is the statistical uncertainty in the Monte Carlo estimate of the energy of the $\bar{3}\bar{2}^{2}1^{4}$ state. Although not definitive, this level of agreement is comparable with that of other candidate states in the SLL~\cite{Hutasoit16,Balram20}.

\begin{figure}[htpb]
\begin{center}
\includegraphics[width=0.47\textwidth,height=0.23\textwidth]{12_SLL_candidate_pair_correlations_arc.pdf}
\caption{(color online) The pair correlation function $g(r)$ as a function of the arc distance $r$ on the sphere for the exact second Landau level Coulomb ground state (red filled dots) and the $\bar{3}\bar{2}^{2}1^{4}$ state of Eq.~(\ref{eq: parton_3_8}) (blue open circles) for $N=12$ electrons at $2l=35$.}
\label{fig: pair_correlations_3_8}
\end{center}
\end{figure}

Next, we turn to charge and neutral gaps. Since only a single system size is accessible to exact diagonalization, we do not have estimates for the thermodynamic gaps of the $2+3/8$ state. Nevertheless, we have evaluated the gaps for the system of $N=12$ electrons. The neutral gap is evaluated by taking the energy difference between the two lowest-energy states of $N=12$ electrons at $2l=35$. To calculate the charge gap, we use Eq.~(\ref{eq: charge_gap}) with $n_{q}=6$, since the insertion of a single flux quantum in the $\bar{3}\bar{2}^{2}1^{4}$ state produces six fundamental quasiholes, each of charge $e/16$. The neutral and charge gaps for $N=12$, evaluated using exact diagonalization with both the spherical and disk pseudopotentials, are about $0.015$ $e^2/(\epsilon\ell)$ and $0.003$ $e^2/(\epsilon\ell)$ respectively.
For comparison, these gaps are smaller than the corresponding gaps at $7/3$, which indicates that the $2+3/8$ state is more fragile than the $7/3$ state~\cite{Balram20}. The gap calculations indicate strong finite-size effects in the second LL, as evidenced by the fact that the neutral gap is larger than the charge gap. In the thermodynamic limit, we expect the charge gap to be greater than or equal to the neutral gap.

\begin{figure}[htpb]
\begin{center}
\includegraphics[width=0.47\textwidth,height=0.23\textwidth]{N_12_bar3bar2bar21111_SLL_finite_width_gaps.pdf}
\caption{(color online) The second Landau level charge (red diamonds) and neutral (blue squares) gaps for $N=12$ electrons at the shift corresponding to the $\bar{3}\bar{2}^{2}1^{4}$ state at $\nu=3/8$, evaluated in the spherical geometry using the spherical (filled symbols) and disk (open symbols) pseudopotentials for various well widths $w$.}
\label{fig: gaps_3_8_SLL_bar3bar2bar21111}
\end{center}
\end{figure}

In the experiment of Ref.~\cite{Kumar10}, the $2+3/8$ state is seen in a sample of width $w=30$ nm at a magnetic field of $B=5.2~{\rm T}$, which corresponds to $w/\ell=2.7$. To incorporate the effect of the finite thickness of the quantum well, we consider a model in which the transverse wave function is taken to be the ground state for a particle in a box of width $w$ with hard-core walls. We calculate the pseudopotentials for this model of the finite-width interaction and carry out exact diagonalization with them~\cite{Balram20}. Encouragingly, we find that the ground state of $N=12$ at $2l=35$ is uniform for both the spherical and disk pseudopotentials for (at least) $w\leq 10\ell$. Moreover, the overlap between the exact ground states at $w=0$ and $w=10\ell$ is greater than $95\%$, which suggests that the ground state is only weakly altered by finite-width corrections. Furthermore, as shown in Fig.~\ref{fig: gaps_3_8_SLL_bar3bar2bar21111}, the ground state has finite charge and neutral gaps for all the widths considered, with the gaps decreasing weakly with increasing width. These results indicate that the ground state at $2+3/8$ is resistant to finite-width perturbations.

\section{Experimental ramifications}
\label{sec: experimental_ramifications}

We believe that the results shown in the previous section make a strong case for the plausibility of the $\bar{3}\bar{2}^{2}1^{4}$ ansatz at $2+3/8$. In this section we deduce some of its experimental consequences, which allow its validity to be assessed and provide ways to unambiguously distinguish it from the $3/8$ BS state.

Owing to the repeated factor of $\bar{2}$, the quasiparticles of the $\bar{3}\bar{2}^{2}1^{4}$ state obey non-Abelian braid statistics~\cite{Wen91}. An additional quasiparticle in the factor $\Phi_{\bar{3}}$ has charge $q_{\bar{3}} = -e/8$, whereas that in the factor $\Phi_{\bar{2}}$ has charge $q_{\bar{2}} = -3e/16$. A combination of a quasihole in $\Phi_{\bar{3}}$ and a quasiparticle in $\Phi_{\bar{2}}$ leads to the fundamental quasiparticle of the $\bar{3}\bar{2}^{2}1^{4}$ state, which carries a charge of $q_{\bar{2}}-q_{\bar{3}}=-e/16$. The $3/8$ BS state is also non-Abelian and its smallest charged quasiparticle also has a charge of $-e/16$~\cite{Hutasoit16}. Due to the presence of the $\bar{3}$ factor, both the $\bar{3}\bar{2}^{2}1^{4}$ and $3/8$ BS states support upstream neutral modes. Thus, a measurement of the charge of the fundamental quasiparticle or the presence of neutral modes does not allow us to distinguish the parton and BS states.
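As a sanity check on these assignments, note that
\begin{equation*}
q_{\bar{2}}-q_{\bar{3}}=-\frac{3e}{16}+\frac{e}{8}=-\frac{e}{16},
\end{equation*}
and that the six fundamental quasiholes generated by the insertion of a single flux quantum [the counting $n_{q}=6$ used with Eq.~(\ref{eq: charge_gap}) above] carry a total charge of $6\times e/16=3e/8=\nu e$, which is exactly the charge that a single added flux quantum must nucleate in an incompressible state at $\nu=3/8$.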
We now mention experimental measurements that can distinguish between the parton and BS orders. Owing to the different shifts, the parton and BS states have different Hall viscosities~\cite{Read09} $\eta_{\rm H} = 3\hbar \mathcal{S}/(64\pi\ell^{2})$, where $\mathcal{S}$ is the shift of the state in the spherical geometry~\cite{Wen92}, with $\mathcal{S}^{\bar{3}\bar{2}^{2}1^{4}}=-3$ and $\mathcal{S}^{3/8~{\rm BS}}=1$. Furthermore, \emph{assuming} full equilibration of the edge modes, the thermal Hall conductance of the $3/8$ BS state is $\kappa^{\rm BS}_{xy}=-1/2[\pi^2 k_{\rm B}^2 /(3h)]T$~\cite{Hutasoit16}, which is different from that of the parton state, which has $\kappa^{\bar{3}\bar{2}^{2}1^{4}}_{xy}=-5/2[\pi^2 k_{\rm B}^2 /(3h)]T$~\cite{Balram19} [the two lowest filled LLs with spin up and spin down provide an additional contribution of $2[\pi^2 k_{\rm B}^2 /(3h)]T$ to $\kappa_{xy}$]. The thermal Hall effect has been measured at certain fillings in the second Landau level~\cite{Banerjee17b} and thus could potentially distinguish the parton and BS states.

We also mention here other candidates that have been put forth for $3/8$. Jolicoeur~\cite{Jolicoeur07} has proposed the following state at $3/8$:
\begin{equation}
\Psi^{\rm Jolicoeur}_{3/8}=\mathcal{P}_{\rm LLL} [\Psi^{\rm bosonic-RR}_{3}]^{*}\Phi_{1}^{3},
\label{Jolicoeur_like_wf_p_1}
\end{equation}
where the bosonic version of the six-cluster Read-Rezayi (RR) state~\cite{Read99} is defined as:
\begin{equation}
\Psi^{\rm bosonic-RR}_{3}=\mathbb{S}[\prod_{i_{1}<j_{1}}(z_{i_{1}}-z_{j_{1}})^2\cdots \prod_{i_{6}<j_{6}}(z_{i_{6}}-z_{j_{6}})^2],
\end{equation}
where $\mathbb{S}$ denotes the operation of symmetrization of the $N$ particles over the six clusters. The Jolicoeur state has a shift of $\mathcal{S}^{\rm Jolicoeur}=1$, which is different from that of the $\bar{3}\bar{2}^{2}1^{4}$ state but identical to that of the $3/8$ BS state. The $3/8$ Jolicoeur wave function is not easily amenable to numerical calculation, and its properties have not been studied in detail in the literature. Using the effective edge theory-based classification, Fr{\"o}hlich {\em et al.}~\cite{Frohlich97} obtained a chiral Abelian state at $3/8$. However, there is no prescription to construct a trial wave function from this approach, which precludes its comparison with numerics.

The anti-Pfaffian analog of the $3/8$ BS state is given by:
\begin{equation}
\Psi_{3/8}^{\rm aPf-BS} = \mathcal{P}_{\rm LLL} \Psi_{1/2}^{\rm aPf} [\Phi^{*}_{3}] \Phi_{1} ,
\label{eq: aPf_BS_3_8}
\end{equation}
where $\Psi_{1/2}^{\rm aPf}$ is the anti-Pfaffian state at $\nu=1/2$~\cite{Levin07,Lee07}, which is the particle-hole conjugate of the Pfaffian state, i.e., $\Psi_{1/2}^{\rm aPf} = \mathcal{P}_{\rm ph} \left( {\rm Pf}\left[ (z_{i}-z_{j})^{-1} \right] \Phi^{2}_{1} \right)$, where $\mathcal{P}_{\rm ph}$ denotes the operation of particle-hole conjugation. The anti-Pfaffian analog of the BS state has a shift of $\mathcal{S}^{\rm aPf-BS}=-3$, which is the same as that of the state given in Eq.~(\ref{eq: parton_3_8}). Moreover, the thermal Hall conductance of the anti-Pfaffian analog of the $3/8$ BS state is the same as that of the $\bar{3}\bar{2}^{2}1^{4}$ state. Thus these states likely describe the same topological order. This can be understood by noting that the $\bar{2}^{2}1^{3}$ state lies in the same universality class as the anti-Pfaffian~\cite{Balram18}.
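The quoted quantum numbers of the anti-Pfaffian analog can be cross-checked by flux counting: in a product of wave functions the fluxes of the factors add, so the inverse fillings and the shifts of the factors in Eq.~(\ref{eq: aPf_BS_3_8}) add as well, giving
\begin{equation*}
\nu^{-1}=2-\frac{1}{3}+1=\frac{8}{3}, \qquad \mathcal{S}^{\rm aPf-BS}=-1-3+1=-3,
\end{equation*}
where we used the known shift $\mathcal{S}^{\rm aPf}=-1$ of the anti-Pfaffian at $\nu=1/2$.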
Unlike for the parton and BS states, we do not know of an efficient way to evaluate quantities for the anti-Pfaffian analog of the BS state, since the square of the anti-Pfaffian, unlike that of the Pfaffian, cannot be written in a simple form.

Finally, we consider two-component states at $3/8$, where the two components could represent spin, valley, layer, subband, or orbital degrees of freedom. Besides the fully polarized state, the $\bar{3}\bar{2}^{2}1^{4}$ state admits the possibility of partially polarized and singlet states arising from the corresponding states at $\nu=-3$ and $\nu=-2$, respectively. On the other hand, the $3/8$ BS state only has fully polarized and partially polarized states, which stem from the corresponding states at $\nu=-3$. It is possible that for certain interaction parameters the non-fully polarized states become the ground state. Recently, an experiment using the technique of spin-resolved pulsed tunneling has indicated the presence of non-fully spin-polarized states in the second Landau level~\cite{Yoo19}.

\section{Conclusion}
\label{sec: conclusions}

Many remarkable concepts such as Majorana modes obeying non-Abelian braid statistics have emerged from the study of the FQHE at $5/2$. In this work, we looked at the only other even denominator filling factor in the SLL where an FQHE state has been well-established experimentally, namely $\nu=2+3/8$. We considered the non-Abelian state described by the $\bar{3}\bar{2}^{2}1^{4}$ wave function and showed it to be a viable candidate to capture the $2+3/8$ Coulomb ground state. Our analysis suggests that the $\bar{3}\bar{2}^{2}1^{4}$ and the $3/8$ Bonderson-Slingerland states are in close competition with each other at $2+3/8$. We also proposed experimental probes that can unambiguously distinguish the non-Abelian topological orders of the $\bar{3}\bar{2}^{2}1^{4}$ and the $3/8$ Bonderson-Slingerland states.

\section*{Acknowledgments}
We acknowledge useful discussions with Maissam Barkeshli, Jainendra K. Jain, Sutirtha Mukherjee, G. J. Sreejith, Arkadiusz W\'ojs, and Andrea Young. Computational portions of this research work were conducted using the Nandadevi supercomputer, which is maintained and supported by the Institute of Mathematical Sciences' High-Performance Computing Center. Some of the numerical calculations were performed using the DiagHam package, for which we are grateful to its authors.

\paragraph{Funding information}
We thank the Science and Engineering Research Board (SERB) of the Department of Science and Technology (DST) for funding support via the Startup Grant No. SRG/2020/000154.

\begin{appendix}

\section{States of composite fermions carrying four vortices in the second Landau level}
\label{sec: 4CF_SLL}

The most prominent FQHE states belonging to the $^{4}$CF sequence, $\nu=n/(4n\pm 1)$, that have been observed in the SLL are at filling factors $\nu=2+1/5$ and $2+2/7$ and their particle-hole conjugates at $\nu=2+4/5$ and $\nu=2+5/7$~\cite{Pan08,Choi08,Kumar10,Zhang12,Reichl14}. In this Appendix, we present evidence to show that the $1/5$ and $2/7$ states are well-described by the $1/5$ Laughlin and $2/7$ Jain states, respectively. This implies that the SLL states at $n/(4n\pm 1)$ and their particle-hole conjugates are analogous to their LLL counterparts. Since the Hilbert space of systems at these low fillings is quite large, it is computationally expensive to study these systems for a wide variety of interactions using exact diagonalization.
Thus, we shall focus our attention on the exact SLL Coulomb point and present only results obtained using the spherical pseudopotentials.

In Table~\ref{tab:overlaps_1_5_Laughlin_LLL_SLL} we present the overlaps of the $1/5$ Laughlin state with the exact Coulomb ground state at $1/5$ in the two lowest Landau levels. The exact Coulomb ground state at $1/5$ in the SLL has an overlap upwards of $0.92$ with the Laughlin state for up to $N=11$ (see also results of Refs.~\cite{Ambrumenil88, Kusmierz18}). We find that the overlaps of the Laughlin state in the SLL are comparable to the analogous numbers in the LLL. Moreover, the overlap between the LLL and SLL Coulomb ground states at $1/5$ is almost unity for all the systems considered in this work.

\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
$N$ & $2l$ & $|\langle \Psi^{\rm Laughlin}_{1/5}| \Psi^{\rm LLL}_{1/5} \rangle|$ & $|\langle \Psi^{\rm Laughlin}_{1/5}| \Psi^{\rm SLL}_{1/5} \rangle|$ & $|\langle \Psi^{\rm SLL}_{1/5}| \Psi^{\rm LLL}_{1/5} \rangle|$ \\ \hline
6 & 25 & 0.9486 & 0.9590 & 0.9993 \\ \hline
7 & 30 & 0.9768 & 0.9818 & 0.9996 \\ \hline
8 & 35 & 0.9589 & 0.9678 & 0.9992 \\ \hline
9 & 40 & 0.9334 & 0.9453 & 0.9992 \\ \hline
10 & 45 & 0.9228 & 0.9386 & 0.9987 \\ \hline
11 & 50 & 0.9413 & 0.9509 & 0.9993 \\ \hline
\end{tabular}
\caption{\label{tab:overlaps_1_5_Laughlin_LLL_SLL} Absolute value of the overlap of the exact Coulomb ground state at $\nu=1/5$ in the lowest Landau level (LLL), $|\Psi^{\rm LLL}_{1/5} \rangle$, and second Landau level (SLL), $|\Psi^{\rm SLL}_{1/5} \rangle$, with the $1/5$ Laughlin state $|\Psi^{\rm Laughlin}_{1/5} \rangle$ obtained in the spherical geometry for $N$ electrons at $2l=5N-5$. For comparison, in the last column, we have shown the overlap between the exact LLL and SLL states. The numbers in the third and fourth columns for up to $N=10$ were previously given in Refs.~\cite{Ambrumenil88,Kusmierz18}.}
\end{table}

In Table~\ref{tab:overlaps_2_7_Jain_LLL_SLL} we present the overlaps of the $2/7$ Jain state with the exact Coulomb ground state at $2/7$ in the two lowest Landau levels. The $2/7$ Jain state was obtained by a brute-force projection of the unprojected state into the LLL. The exact Coulomb ground states at $2/7$ in the LLL and SLL have overlaps of $0.85$ or higher with the $2/7$ Jain state for up to $N=10$. Furthermore, the overlap between the LLL and SLL Coulomb ground states at $2/7$ is also $91\%$ or higher for all the systems considered in this work.

\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
$N$ & $2l$ & $|\langle \Psi^{\rm Jain}_{2/7}| \Psi^{\rm LLL}_{2/7} \rangle|$ & $|\langle \Psi^{\rm Jain}_{2/7}| \Psi^{\rm SLL}_{2/7} \rangle|$ & $|\langle \Psi^{\rm SLL}_{2/7}| \Psi^{\rm LLL}_{2/7} \rangle|$ \\ \hline
4 & 12 & 0.9999 & 0.9992 & 0.9996 \\ \hline
6 & 19 & 0.9964 & 0.9681 & 0.9833 \\ \hline
8 & 26 & 0.9989 & 0.9762 & 0.9819 \\ \hline
10 & 33 & 0.9678 & 0.8458 & 0.9403 \\ \hline
12 & 40 & $-$ & $-$ & 0.9119 \\ \hline
\end{tabular}
\caption{\label{tab:overlaps_2_7_Jain_LLL_SLL} Absolute value of the overlap of the exact Coulomb ground state at $\nu=2/7$ in the lowest Landau level (LLL), $|\Psi^{\rm LLL}_{2/7} \rangle$, and second Landau level (SLL), $|\Psi^{\rm SLL}_{2/7} \rangle$, with the $2/7$ Jain state $|\Psi^{\rm Jain}_{2/7} \rangle$ obtained in the spherical geometry for $N$ electrons at $2l=7N/2-2$. For comparison, in the last column, we have shown the overlap between the exact LLL and SLL states.
We have not been able to construct the $2/7$ Jain state for $N=12$ electrons, so its overlap with the exact states is currently unavailable (indicated by $-$).}
\end{table}

Besides $1/5$ and $2/7$ and their particle-hole conjugates, there are no FQHE states in the sequence $n/(4n\pm 1)$ that have been definitively established in the SLL. Some signatures of FQHE have been observed at $2+7/9$, which is the particle-hole conjugate of the $2+2/9$ state~\cite{Pan08, Kumar10}. Therefore, for completeness, in Table~\ref{tab:overlaps_2_9_Jain_LLL_SLL} we present the overlaps of the $2/9$ Jain state with the exact SLL Coulomb ground state at $2/9$. As with the $2/7$ Jain state, we obtain the $2/9$ Jain state by a brute-force projection of the unprojected state into the LLL. The exact Coulomb ground states at $2/9$ in the LLL and SLL have overlaps of about $97\%$ or higher with the $2/9$ Jain state for up to $N=10$. Moreover, the exact LLL and SLL Coulomb ground states at $2/9$ are almost identical to each other for all the systems considered in this work. All these results strongly suggest that the SLL states at $n/(4n\pm 1)$ and their particle-hole conjugates are analogous to their LLL counterparts.

\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
$N$ & $2l$ & $|\langle \Psi^{\rm Jain}_{2/9}| \Psi^{\rm LLL}_{2/9} \rangle|$ & $|\langle \Psi^{\rm Jain}_{2/9}| \Psi^{\rm SLL}_{2/9} \rangle|$ & $|\langle \Psi^{\rm SLL}_{2/9}| \Psi^{\rm LLL}_{2/9} \rangle|$ \\ \hline
4 & 12 & 0.9999 & 0.9992 & 0.9996 \\ \hline
6 & 21 & 0.9928 & 0.9892 & 0.9991 \\ \hline
8 & 30 & 0.9955 & 0.9928 & 0.9989 \\ \hline
10 & 39 & 0.9744 & 0.9766 & 0.9977 \\ \hline
12 & 48 & $-$ & $-$ & 0.9972 \\ \hline
\end{tabular}
\caption{\label{tab:overlaps_2_9_Jain_LLL_SLL} Absolute value of the overlap of the exact Coulomb ground state at $\nu=2/9$ in the lowest Landau level (LLL), $|\Psi^{\rm LLL}_{2/9} \rangle$, and second Landau level (SLL), $|\Psi^{\rm SLL}_{2/9} \rangle$, with the $2/9$ Jain state $|\Psi^{\rm Jain}_{2/9} \rangle$ obtained in the spherical geometry for $N$ electrons at $2l=9N/2-6$. For comparison, in the last column, we have shown the overlap between the exact LLL and SLL states. We have not been able to construct the $2/9$ Jain state for $N=12$ electrons, so its overlap with the exact states is currently unavailable (indicated by $-$).}
\end{table}

We now turn to the charge and neutral gaps obtained from exact diagonalization at $\nu=1/5,~2/7$, and $2/9$ in the SLL. In Fig.~\ref{fig: gaps_4CFs_SLL} we show the charge and neutral gaps at these three fillings in the two lowest LLs. For all the systems considered in this work, all three fillings support finite charge and neutral gaps in the two lowest LLs. However, since only a few systems are available, the estimated extrapolated gaps in many cases have large uncertainties. Interestingly, we find that the gaps at $1/5$ in the SLL are larger than the corresponding gaps in the LLL. This should be contrasted with the $1/3$ filling, where the gap in the LLL is larger than that in the SLL~\cite{Balram20}. We note that the gaps at $\nu=1/5,~2/7$ and $2/9$ in the LLL and SLL are comparable to each other in order of magnitude.
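For reference, the flux values quoted in the table captions follow from $2l=\nu^{-1}N-\mathcal{S}$ and the shifts of the corresponding trial states:
\begin{align*}
1/5~\text{Laughlin}~(\mathcal{S}=5):&\quad 2l=5N-5, \\
2/7~\text{Jain},~\bar{2}1^{4}~(\mathcal{S}=-2+4=2):&\quad 2l=\frac{7N}{2}-2, \\
2/9~\text{Jain},~21^{4}~(\mathcal{S}=2+4=6):&\quad 2l=\frac{9N}{2}-6.
\end{align*}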
\begin{figure*}[htpb] \begin{center} \includegraphics[width=0.32\textwidth,height=0.16\textwidth]{1_5_gaps_LLL_SLL.pdf} \includegraphics[width=0.32\textwidth,height=0.16\textwidth]{2_7_gaps_LLL_SLL.pdf} \includegraphics[width=0.32\textwidth,height=0.16\textwidth]{2_9_gaps_LLL_SLL.pdf} \caption{(color online) Thermodynamic extrapolations of the charge (red diamonds) and neutral (blue squares) gaps in the second Landau level (filled symbols) at $\nu=1/5$ (left panel), $2/7$ (center panel) and $2/9$ (right panel) obtained from exact diagonalization in the spherical geometry. The extrapolated gaps, obtained from a linear fit in $1/N$, are quoted in Coulomb units of $e^2/(\epsilon\ell)$ on the plot, with the error in the extrapolation shown in parentheses. For comparison, we have also shown the corresponding lowest Landau level gaps (open symbols). The lowest Landau level charge gaps at $1/5$ and $2/9$ were previously given in Ref.~\cite{Zuo20}. } \label{fig: gaps_4CFs_SLL} \end{center} \end{figure*} To summarize, our results suggest that the states belonging to the $^{4}$CF sequence in the two lowest Landau levels are similar in nature. In particular, the topological properties of the $n/(4n\pm 1)$ states in the LLL and SLL are expected to be identical to each other. \section{Fractional quantum Hall effect at $\nu=2+6/13$: an update on the results of Ref.~\cite{Balram18a}} \label{sec: 6_13_parton} The $\bar{3}\bar{2}1^{3}$ state has been proposed as a candidate~\cite{Balram18a} to describe the experimentally observed FQHE at $\nu=2+6/13$~\cite{Kumar10}. In Ref.~\cite{Balram18a}, the $\bar{3}\bar{2}1^{3}$ state was constructed in real space for the smallest system of $N=12$ electrons and was compared, using the Monte Carlo method, against the ground states obtained from exact diagonalization of the SLL Coulomb interaction as well as certain model interactions. These comparisons, carried out using the real-space representation of the parton state, are time-consuming and computationally expensive. The Fock space representation, by contrast, readily allows a calculation of a state's overlap with ground states obtained from the exact diagonalization of various interactions. At the time of publication, it was not possible to obtain the Fock space representation of this parton state. We have now been able to obtain the Fock space representation of the $\bar{3}\bar{2}1^{3}$ state, evaluated as $\Psi_{2/3}^{\rm Jain}\Psi_{3/5}^{\rm Jain}/\Phi_{1}$, for $N=12$ electrons at $2l=28$. We shall present some results obtained from it in this Appendix. We note that results obtained from exact diagonalization for the next system size of $N=18$ electrons were shown in the supplemental material of Ref.~\cite{Balram20}. However, due to the prohibitively large Hilbert space dimension, it has not been possible to obtain the Fock space representation of the $\bar{3}\bar{2}1^{3}$ state for $N=18$. To evaluate the Fock space representation, i.e., the expansion coefficients in the $L_{z}=0$ basis, of the desired state (which can be evaluated in real space), we follow the method outlined in Refs.~\cite{Sreejith11, Balram18}. Since our desired state is uniform, we first calculate all the $L=0$ states of the system of interest: we generate a sufficient number of $L=0$ states by starting with random initial vectors in the $L_{z}=0$ basis and Lanczos diagonalizing the $L^{2}$ operator.
We then Gram-Schmidt orthogonalize the states obtained in the previous step to get a complete set of orthonormal $L=0$ states. Once the set of $L=0$ states is obtained, we evaluate them, as well as the desired state, at sufficiently many (a few times the dimension of the $L=0$ subspace) configurations $\{z_{k}\}$ to obtain a set of linear equations, which we then solve by the least-squares method (using the procedure of iterative refinement~\cite{Wilkinson94} implemented in the ALGLIB package~\cite{Alglib}) to obtain the expansion coefficients. Depending on the chosen sets of $\{z_{k}\}$, the solution to the linear equations can be numerically unstable, which is why we solve an over-determined system of equations. Note that each configuration $\{z_{k}\}$ gives two equations, one for the real part and one for the imaginary part, since the expansion coefficients are chosen to be real. Instead of choosing the configurations completely at random, we find it useful to run a Monte Carlo simulation with the desired state and pick configurations only after the Monte Carlo has thermalized. \begin{figure}[htpb] \begin{center} \includegraphics[width=0.47\textwidth,height=0.23\textwidth]{N_12_bar3bar2111_2_5_BS_SLL_finite_width_overlaps.pdf} \caption{(color online) The overlap of the $\bar{3}\bar{2}1^{3}$ state (blue squares) with the exact second Landau level Coulomb ground state evaluated in the spherical geometry using the disk (open symbols) and spherical (filled symbols) pseudopotentials for various widths $w$ for $N=12$ electrons at $2l=28$. This system aliases with the $2/5$ Bonderson-Slingerland state; therefore, for comparison, we have also shown its overlaps (red diamonds) with the exact second Landau level Coulomb ground states.} \label{fig: overlaps_6_13_SLL_bar3bar2111} \end{center} \end{figure} In Fig.~\ref{fig: overlaps_6_13_SLL_bar3bar2111} we show the overlap of the exact second LL Coulomb ground states obtained using the disk and spherical pseudopotentials with the $\bar{3}\bar{2}1^{3}$ state for a system of $N=12$ electrons at $2l=28$ for $w\leq 10\ell$. We find that the $\bar{3}\bar{2}1^{3}$ state has a reasonable overlap with the exact SLL Coulomb ground state for all the widths considered. Furthermore, as the well width $w$ is increased, the overlap of the $\bar{3}\bar{2}1^{3}$ state with the exact SLL Coulomb ground state increases, which indicates that increasing the well width enhances the stability of the $\bar{3}\bar{2}1^{3}$ state. These results are consistent with previous results of Ref.~\cite{Balram18a}, where the effect of the finite well width was modeled by the Zhang-DasSarma interaction~\cite{Zhang86}. The system of $N=12$ electrons at $2l=28$ aliases with the $2/5$ Bonderson-Slingerland (BS) state~\cite{Bonderson08}, which has been put forth as a candidate for the experimentally observed $12/5$ FQHE~\cite{Bonderson12}. Therefore, for completeness, in Fig.~\ref{fig: overlaps_6_13_SLL_bar3bar2111} we have also shown the overlap of the $2/5$ BS state with the exact second LL Coulomb ground states. Consistent with previous results of Ref.~\cite{Bonderson12}, we find that the $2/5$ BS state has a good overlap with the exact SLL Coulomb ground state. Next, we present results on the charge and neutral gaps at $2+6/13$. For the gap calculations, we have only been able to access the system of $N=12$ electrons using exact diagonalization. The neutral gap is evaluated by taking the energy difference between the two lowest-energy states of $N=12$ electrons at $2l=28$.
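Before turning to the charge gap, we note that the core of the Fock-space expansion procedure described earlier in this Appendix is an over-determined least-squares solve. The following is a minimal illustrative Python sketch of that step; the arrays and dimensions are placeholders, and a plain least-squares routine is used here in place of the iterative refinement employed in our actual calculations.
\begin{verbatim}
import numpy as np

# Placeholder dimensions: dim = number of L = 0 basis states,
# ncfg = number of sampled configurations {z_k} (a few times dim).
dim, ncfg = 50, 200
rng = np.random.default_rng(0)

# A[j, i] = value of the i-th orthonormal L = 0 basis state at the
# j-th configuration; b[j] = value of the desired (parton) state there.
A = rng.normal(size=(ncfg, dim)) + 1j * rng.normal(size=(ncfg, dim))
b = A @ rng.normal(size=dim)  # fake target with known real coefficients

# Real expansion coefficients: stack real and imaginary parts so that
# each configuration contributes two real equations.
A_real = np.vstack([A.real, A.imag])
b_real = np.concatenate([b.real, b.imag])
coeffs, residual, rank, _ = np.linalg.lstsq(A_real, b_real, rcond=None)
\end{verbatim}
In practice, the matrix entries are the basis states evaluated at Monte Carlo-thermalized configurations $\{z_{k}\}$, as described above.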
To calculate the charge gap, we use Eq.~(\ref{eq: charge_gap}) with $n_{q}=6$ since the insertion of a single flux quantum in the $\bar{3}\bar{2}1^{3}$ state produces six fundamental quasiholes, each of charge $e/13$~\cite{Balram18a}. The neutral and charge gaps for $N=12$, evaluated using exact diagonalization with both the spherical and disk pseudopotentials, are about $0.02$ $e^2/(\epsilon\ell)$ and $0.002$ $e^2/(\epsilon\ell)$, respectively, at width $w=0$, and decrease with increasing $w$ as shown in Fig.~\ref{fig: gaps_6_13_SLL_bar3bar2111}. These gaps are smaller than the corresponding gaps at $7/3$, which suggests that the $2+6/13$ state is more fragile than the prominent $7/3$ state~\cite{Balram20}. As pointed out above, the system of $N=12$ at $2l=28$ aliases with the $2/5$ BS state. The neutral gap corresponding to the $2/5$ BS state is identical to that shown in Fig.~\ref{fig: gaps_6_13_SLL_bar3bar2111}. However, the charge gap corresponding to the $2/5$ BS state is larger by a factor of three compared to that shown in Fig.~\ref{fig: gaps_6_13_SLL_bar3bar2111}, since the insertion of a single flux quantum in the $2/5$ BS state produces $n_{q}=2$ fundamental quasiholes, each of charge $e/5$~\cite{Bonderson12}. \begin{figure}[htpb] \begin{center} \includegraphics[width=0.47\textwidth,height=0.23\textwidth]{N_12_bar3bar2111_SLL_finite_width_gaps.pdf} \caption{(color online) The second Landau level charge (red diamonds) and neutral (blue squares) gaps for $N=12$ electrons at the shift corresponding to the $\bar{3}\bar{2}1^{3}$ state at $\nu=6/13$ evaluated in the spherical geometry using the spherical (filled symbols) and disk pseudopotentials (open symbols) for various well-widths $w$.} \label{fig: gaps_6_13_SLL_bar3bar2111} \end{center} \end{figure} In summary, our results suggest that the $\bar{3}\bar{2}1^{3}$ state gives a good description of the $2+6/13$ Coulomb ground state. Encouragingly, we find that the $\bar{3}\bar{2}1^{3}$ state is fairly robust to perturbations stemming from the finite width of the quantum well. \section{Fractional quantum Hall effect at $\nu=2+3/7$: an update on the results of Ref.~\cite{Faugno20a}} \label{sec: 3_7_parton} The $\bar{3}^{2}1^{3}$ state has been proposed as a candidate~\cite{Faugno20a} to describe an FQHE that could arise at $\nu=2+3/7$. As yet, FQHE has not been definitively established at $3/7$ in the second Landau level, though signatures of it have been seen in experiments~\cite{Choi08}. In Ref.~\cite{Faugno20a}, the $\bar{3}^{2}1^{3}$ state was constructed in Fock space for only the smallest system of $N=9$ electrons. Using the method outlined in Appendix~\ref{sec: 6_13_parton}, we have now been able to obtain the Fock space representation of the $\bar{3}^{2}1^{3}$ state, evaluated as $[\Psi_{3/5}^{\rm Jain}]^{2}/\Phi_{1}$, for the next system size of $N=12$ electrons at $2l=31$. We shall present some results obtained from it in this Appendix. We note that the state at $\nu=3/7$ for sufficiently short-range interactions, as well as for the long-range Coulomb interaction in the LLL and the $n=1$ LL of monolayer graphene, is the Abelian $\Psi_{3/7}^{\rm Jain}$ CF state~\cite{Jain07, Balram15c, Yang19a, Andrews20}.
\begin{figure}[htpb] \begin{center} \includegraphics[width=0.47\textwidth,height=0.23\textwidth]{N_12_bar3bar3111_3_8_BS_SLL_finite_width_overlaps.pdf} \caption{(color online) The overlap of the $\bar{3}^{2}1^{3}$ state (blue squares) with the exact second Landau level Coulomb ground state evaluated in the spherical geometry using the disk (open symbols) and spherical (filled symbols) pseudopotentials for various widths $w$ for $N=12$ electrons at $2l=31$. This system aliases with the $3/8$ Bonderson-Slingerland state; therefore, for comparison, we have also shown its overlaps (red diamonds) with the exact second Landau level Coulomb ground states.} \label{fig: overlaps_3_7_SLL_bar3bar3111} \end{center} \end{figure} In Fig.~\ref{fig: overlaps_3_7_SLL_bar3bar3111} we show the overlap of the exact second LL Coulomb ground states obtained using the disk and spherical pseudopotentials with the $\bar{3}^{2}1^{3}$ state for a system of $N=12$ electrons at $2l=31$ for $w\leq 10\ell$. We find that the $\bar{3}^{2}1^{3}$ state has a reasonably high overlap with the exact SLL Coulomb ground state for all the widths considered. The system of $N=12$ electrons at $2l=31$ aliases with the $3/8$ Bonderson-Slingerland (BS) state~\cite{Bonderson08}, which was discussed in detail in the main text. Therefore, for completeness, in Fig.~\ref{fig: overlaps_3_7_SLL_bar3bar3111} we have also shown the overlap of the $3/8$ BS state with the exact second LL Coulomb ground states. Consistent with previous results of Ref.~\cite{Hutasoit16}, which considered only the exact SLL Coulomb point in the spherical geometry, we find that the $3/8$ BS state has a good overlap with the exact SLL Coulomb ground state. Next, we present results on the charge and neutral gaps for the system of $N=12$ electrons. The neutral gap is evaluated by taking the energy difference between the two lowest-energy states of $N=12$ electrons at $2l=31$. To calculate the charge gap, we use Eq.~(\ref{eq: charge_gap}) with $n_{q}=3$ since the insertion of a single flux quantum in the $\bar{3}^{2}1^{3}$ state produces three fundamental quasiholes, each of charge $e/7$~\cite{Faugno20a}. The neutral and charge gaps for $N=12$, evaluated using exact diagonalization with both the spherical and disk pseudopotentials, are about $0.015$ $e^2/(\epsilon\ell)$ and $0.001$ $e^2/(\epsilon\ell)$, respectively, at width $w=0$, and decrease with increasing $w$ as shown in Fig.~\ref{fig: gaps_3_7_SLL_bar3bar3111}. These gaps are smaller than the corresponding gaps at experimentally observed fractions in the SLL, which suggests that the $2+3/7$ state is quite fragile. As we mentioned in the previous paragraph, the system of $N=12$ at $2l=31$ aliases with the $3/8$ BS state. The neutral gap corresponding to the $3/8$ BS state is identical to that shown in Fig.~\ref{fig: gaps_3_7_SLL_bar3bar3111}. However, the charge gap corresponding to the $3/8$ BS state is smaller by a factor of two compared to that shown in Fig.~\ref{fig: gaps_3_7_SLL_bar3bar3111}, since the insertion of a single flux quantum in the $3/8$ BS state produces $n_{q}=6$ fundamental quasiholes, each of charge $e/16$~\cite{Hutasoit16}.
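Since the same exact spectra are divided by different values of $n_{q}$ for the two candidate states, the factor-of-two difference quoted above follows directly from the charge-gap definition. The following minimal sketch assumes the standard flux-insertion form of Eq.~(\ref{eq: charge_gap}); the ground-state energies are placeholders, not our data.
\begin{verbatim}
# Charge gap from flux insertion, assuming the standard definition
# gap_c = [E(2l+1) + E(2l-1) - 2*E(2l)] / n_q  (placeholder energies).
def charge_gap(E_plus, E_minus, E_0, n_q):
    return (E_plus + E_minus - 2.0 * E_0) / n_q

E_plus, E_minus, E_0 = -13.091, -13.103, -13.100  # illustrative only

# Same spectra, different candidate states: n_q = 3 for the
# \bar{3}^2 1^3 state versus n_q = 6 for the 3/8 BS state, hence
# the factor-of-two difference between the quoted charge gaps.
print(charge_gap(E_plus, E_minus, E_0, n_q=3))
print(charge_gap(E_plus, E_minus, E_0, n_q=6))
\end{verbatim}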
\begin{figure}[htpb] \begin{center} \includegraphics[width=0.47\textwidth,height=0.23\textwidth]{N_12_bar3bar3111_SLL_finite_width_gaps.pdf} \caption{(color online) The second Landau level charge (red diamonds) and neutral (blue squares) gaps for $N=12$ electrons at the shift corresponding to the $\bar{3}^{2}1^{3}$ state at $\nu=3/7$ evaluated in the spherical geometry using the spherical (filled symbols) and disk pseudopotentials (open symbols) for various well-widths $w$.} \label{fig: gaps_3_7_SLL_bar3bar3111} \end{center} \end{figure} In summary, our results encouragingly suggest that the $\bar{3}^{2}1^{3}$ state gives a good description of the $2+3/7$ Coulomb ground state observed in numerics. However, we find that the $\bar{3}^{2}1^{3}$ state has a very small to vanishing charge gap. This suggests that the FQHE at $\nu=2+3/7$ is a very delicate state and provides a clue as to why it has not been definitively established in experiments. \end{appendix}
\section{Introduction} \subsection{Background and Literature Review} The wireless vehicular network (WVN) has been evolving rapidly by taking advantage of sophisticated technologies such as artificial intelligence (AI) \cite{ai}, machine learning \cite{ml}, Millimeter Wave (mmWave) \cite{eJEns_Mmwave,eRelayFSO3, mmw,eAppr,mmwArx }, and 5G \cite{5g,eRelayFSO2,eRelayFSO1}. It offers a variety of advanced features like real-time alerting messages \cite{realTime} and cloud services \cite{cloud, eVCloud}. However, as with any communication network, the WVN is subject to many challenges, especially security, which is a key parameter in guaranteeing an acceptable quality of service (QoS). \\ Several research papers have discussed the security challenges and proposed a variety of solutions from the perspective of different OSI (Open Systems Interconnection) layers \cite{generalSec1,generalSec2}. Regarding physical layer security (PLS), it has been proven that securing the communication at this level is an efficient scheme to deal with passive threats like eavesdropping attacks. In this kind of threat, the malicious network entity is able to listen to a private communication by intercepting the transmitted signal and revealing the confidential information.\let\thefootnote\relax\footnotetext{ This work was supported in part by the U.S. National Science Foundation (NSF) under the grant CNS-1650831.} What makes eavesdropping attacks particularly critical is that they are difficult for the victims to detect, since they can be carried out without leaving any traces. To deal with this security dilemma, some related works proposed the use of artificial noise (AN), where the transmitter perturbs the attacker's channel by sending a dedicated signal \cite{an1,an2}. This signal is generated orthogonally to the main link between the legitimate entities so that it affects only the eavesdropper's link. Other papers studied the employment of a friendly jammer (J), which has the responsibility of jamming the eavesdroppers' channels instead of the transmitter \cite{jam1,jam2}. \\ In general, employing a jammer $J$ as a third network entity can be more efficient in the case of multiple-eavesdropper attacks. In other words, if the network is subject to several attacks, it is better to have a jammer node deal with all the attackers rather than have each communicating node deal with all the attackers by itself (it would be very expensive in terms of power if each transmitter had to dedicate a fraction of its power to jamming all the eavesdroppers' channels). \subsection{Our Contribution} In this paper, we focus on using a friendly jammer to protect V2I communications. As the channel fading model, we adopt the Double Shadowed $\kappa$-$\mu$ Fading Model, denoted by $\mathcal{D}(\cdot)$ and recently presented in \cite{channel}, which is more general and covers a wide range of fading models such as double shadowed Rice, Rician shadowed, Nakagami-q, Nakagami-m, Rayleigh, one-sided Gaussian, etc. We highlight our contributions as follows: \begin{enumerate} \item[$\bullet$] We propose the use of a friendly jammer $J$ in V2I communications under an eavesdropping attack scenario. \item[$\bullet$] We adopt the new channel model $\mathcal{D}(\cdot)$. \item[$\bullet$] We derive a closed-form expression for the ergodic capacity at the receiver under the $\mathcal{D}(\cdot)$ model.
\item[$\bullet$] We derive a closed-form expression for the cumulative distribution function (CDF) of the signal-to-interference-plus-noise ratio (SINR) and the ergodic capacity at the eavesdropper while considering Nakagami-m as a special case of $\mathcal{D}(\cdot)$. \item[$\bullet$] We examine the impact of the blockage density at the receiver by adopting the special case model Rician shadowed. \end{enumerate} \subsection{Paper Structure} The paper is organized as follows: Section \RN{2} introduces the system model. Section \RN{3} studies the outage probability, while Section \RN{4} examines the ergodic capacity and the secrecy capacity. Then, Section \RN{5} evaluates the performance of the security approach based on numerical results. Finally, we outline our conclusion in Section \RN{6}. \section{System Model} \subsection{Vehicular Communications and Attack Model} \begin{figure}[H] \includegraphics[height=55mm, width=\linewidth]{attackmodel.PNG} \caption{V2I communications in the presence of multiple eavesdroppers. } \end{figure} In V2I communications, base stations (BSs) and vehicles should be able to exchange information and data securely. However, the transmitted signals may be subject to intentional overhearing, where an eavesdropper intercepts the signal and reveals the secret messages. In Fig. 1, we have a typical model where the attacker \textit{E} is listening to the communication between the source $S$ (the base station) and the legitimate receiver $R$. In this case, $J$ aims to protect the network by sending AN to the attacker $E$, while $R$ is immune since the AN is orthogonal to its channel. \subsection{Channel Model} The received signals at \textit{R} and \textit{E} are, respectively: \begin{equation} \begin{aligned} { y_{_R}}& = \sum_{n=1}^{N}h_{_{S,R;n}}x_{_I} + \sum_{k=1}^{K}h_{_{J,R;k}}x_{_J} +w_{_R}\\ &= \sum_{n=1}^{N}h_{_{S,R;n}}x_{_I} +w_{_R}, \end{aligned} \end{equation} \begin{equation} \begin{aligned} { y_{_E}}& =\sum_{n=1}^{N}h_{_{S,E;n}}x_{_I} + \sum_{k=1}^{K}h_{_{J,E;k}}x_{_J} +w_{_E}, \end{aligned} \end{equation} where the channel model parameters are defined in Table I.
\begin{table}[h] \caption{ Channel model parameters description} \begin{tabular}{|c|c|} \hline \rowcolor[HTML]{FFCE93} Parameters & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCE93}Description} \\ \hline N & \multicolumn{1}{c|}{Number of antennas at the BS} \\ \hline \rowcolor[HTML]{EFEFEF} K & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}Number of antennas at the Jammer \textit{J }} \\ \hline $h_{_{a,b;c}}$ & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}The fading amplitude of the channel corresponding\\ to the link between the antenna $c$ of the node $a$\\ and the receiving node $b$,\\ $a\in\{J,S\}$ and $b\in\{R,E\}$\end{tabular}} \\\hline \cline{2-2} & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}$g_{_{a,b}}$ is the channel gain, where $|g_{a,b}|^2$ $\sim \mathcal{D}(\cdot)$} \\ \cline{2-2} & \multicolumn{1}{c|}{\begin{tabular}[]{@{}c@{}}$r_{_{a,b;c}}$ is the distance between the antenna $c$ \\of the node $a$ and the receiving node $b$\end{tabular}} \\ \cline{2-2} \multirow{-4}{*}{\begin{tabular}[c]{@{}c@{}}$h_{_{a,b;c}}$\\ $=g_{_{a,b;c}}\sqrt{r_{_{a,b;c}}^{-\delta}}$\end{tabular}} & \cellcolor[HTML]{EFEFEF}$\delta$ is the path loss exponent \\ \hline \rowcolor[HTML]{EFEFEF} $x_{_I}$ & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}\begin{tabular}[]{@{}c@{}}The confidential information signal\\ sent by $S$ with power $P_{_S}$/antenna \end{tabular}} \\ \hline $x_{_J}$ & \multicolumn{1}{c|}{\begin{tabular}[]{@{}c@{}}Jamming signal (AN) emitted by $J$\\ with power $P_{_J}$/antenna \end{tabular}} \\ \hline \rowcolor[HTML]{EFEFEF} $w_{_b}$ & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}\begin{tabular}[]{@{}c@{}}The additive white Gaussian noise (AWGN) \\at the node $b$ with variance ${\sigma_{w_{_b}}^2}$ \end{tabular}} \\ \hline \end{tabular} \end{table} As we can deduce from Eq. (1), $\sum_{k=1}^{K}h_{_{J,R;k}}x_{_J}=0$, which means that the jamming signal only affects the attacker while leaving the signal at the legitimate receiver unchanged.\\ The received SNR and SINR at $R$ and $E$, respectively, are \begin{equation} \begin{aligned} { \gamma_{_R}}=\frac{ \sum_{n=1}^{N}P_{_S}|g_{_{S,R;n}}|^{2}r_{_{S,R;n}}^{-\delta} }{\sigma_{w_{_R}}^2} = \sum_{n=1}^{N}\gamma_{_{R;n}}, \end{aligned} \end{equation} \begin{equation} \begin{aligned} { \gamma_{_E}} =&\frac{\sum_{n=1}^{N}P_{_S}|h_{_{S,E;n}}|^{2} }{\sum_{k=1}^{K}P_{_{J}}|h_{_{J,E;k}}|^{2}+{\sigma_{w_{_E}}^2}}=\frac{ \frac{\sum_{n=1}^{N}P_{_S}|g_{_{S,E;n}}|^{2}r_{_{S,E;n}}^{-\delta}}{\sigma_{w_{_E}}^2}} {1+\frac{\sum_{k=1}^{K}P_{_{J}}|g_{_{J,E;k}}|^{2}r_{_{J,E;k}}^{-\delta}}{{\sigma_{w_{_E}}^2}}}\\&=\frac{ \sum_{n=1}^{N}\gamma_{_{I;n}} }{1+\sum_{k=1}^{K}\gamma_{_{J;k}}}=\frac{ \gamma_{_{I}} }{1+\gamma_{_{J}}}, \end{aligned} \end{equation} where $\gamma_{_{R;n}} $ $\sim$ $\mathcal{D}(\cdot)$ and $\gamma_{_{I;n}}$ $\sim$ $\mathcal{D}(\cdot)$ are, respectively, the SNRs corresponding to the confidential signal received at $R$ and $E$ via the $n$-th antenna, while $\gamma_{_{J;k}}$ $\sim$ $\mathcal{D}(\cdot)$ is the SNR due to the jamming signal sent by $J$ at $E$ via the $k$-th antenna.
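As a quick numerical sanity check of Eqs. (3) and (4), the SNR and SINR statistics can be estimated by Monte Carlo simulation. The Python sketch below assumes Gamma-distributed channel power gains (the Nakagami-m special case adopted later for the eavesdropper links); all numerical values are illustrative assumptions, not the setup of our experiments.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
trials, N, K = 100_000, 4, 4          # antennas at the BS and jammer
P_S, P_J, sigma2, delta = 10.0, 3.0, 1.0, 2.0
r_SE, r_JE = 10.0, 10.0               # distances to the eavesdropper (m)

# Nakagami-m fading: |g|^2 ~ Gamma(shape=m, scale=1/m), unit mean.
m = 2.0
g2_I = rng.gamma(m, 1.0 / m, size=(trials, N))
g2_J = rng.gamma(m, 1.0 / m, size=(trials, K))

# Eq. (4): SINR at E = sum_n gamma_{I;n} / (1 + sum_k gamma_{J;k}).
gamma_I = (P_S * g2_I * r_SE ** (-delta) / sigma2).sum(axis=1)
gamma_J = (P_J * g2_J * r_JE ** (-delta) / sigma2).sum(axis=1)
sinr_E = gamma_I / (1.0 + gamma_J)

zeta = 10 ** (-8 / 10)                # threshold of -8 dB
print("empirical outage at E:", np.mean(sinr_E < zeta))
\end{verbatim}
The empirical outage probability obtained this way can be used to validate the closed-form CDF derived in the following sections.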
The general probability density function (PDF) and CDF of the random variable (RV) $\gamma_{_l}$, where $ l\in \{(I;n), (J;k), (R;n) \}$, are respectively \begin{equation} \begin{aligned} { f}_{\gamma_{_l}}(\gamma) &=\frac{(s_{_l}-1)^{s_{_l}}c_{_l}^{c_{_l}}T_{_l}^{\mu_{_l}}\gamma^{\mu_{_l}-1}\overline{\gamma}^{s_{_l}}}{(c_{_l}+\mu_{_l}\kappa_{_l})^{c_{_l}}B(s_{_l},\mu_{_l})(T_{_l}\gamma+(s_{_l}-1) \overline{\gamma})^{s_{_l}+\mu_{_l}}}\\ &\times_{2}F_{1}\left( c_{_l}, s_{_l}+\mu_{_l};\mu_{_l}; \frac{K_{_l}\mu_{_l}\kappa_{_l}\gamma}{T_{_l}\gamma+(s_{_l}-1)\overline{\gamma}}\right), \end{aligned} \end{equation} \begin{equation} \begin{aligned} &{ F}_{\gamma_{_l}}(\gamma) =\left( \frac{c_{_l}}{c_{_l}+\kappa_{_l}\mu_{_l}}\right)^{c_{_l}} \left( \frac{T_{_l}\gamma}{\overline{\gamma}(s_{_l}-1)}\right)^{\mu_{_l}} \sum_{i=0}^{\infty} \left(\frac{K_{_l}\mu_{_l}\kappa_{_l}\gamma}{(s_{_l}-1)\overline{\gamma}}\right)^i \\& \frac{(c_{_l})_i\,(i+\mu_{_l})_{s_{_l}}}{i!\,\Gamma(s_{_l})\,(i+\mu_{_l})}\, {}_{2}F_{1}\left( i+\mu_{_l},i+\mu_{_l}+ s_{_l};i+\mu_{_l}+1; \tau\right), \end{aligned} \end{equation} where $_{2}F_{1}(\cdot,\cdot;\cdot;\cdot)$ represents the Gauss hypergeometric function, $B(\cdot,\cdot)$ denotes the Beta function, $\tau=\frac{-T_{_l}\gamma}{\overline{\gamma}(s_{_l}-1)}$, and $(x)_{i}=\frac{\Gamma(x+i)}{\Gamma(x)}$ is the Pochhammer symbol. For the sake of organization, the parameters on which the distribution of $\gamma_{_l}$ depends are presented in Table II. \begin{table}[h] \caption{ $\mathcal{D}(\cdot)$ parameters description\cite{channel}} \begin{tabular}{|r|c|c|} \hline \rowcolor[HTML]{FFCE93} \multicolumn{1}{|c|}{\cellcolor[HTML]{FFCE93}SNR} & Parameters & Description \\ \hline & $c_{_l}$ & Shape of the Nakagami-m RV \\ \cline{2-3} & \cellcolor[HTML]{EFEFEF}$s_{_l}$ & \cellcolor[HTML]{EFEFEF}Shape of the inverse of Nakagami-m RV \\ \cline{2-3} & $\mu_{_l}$ & Number of multipath clusters \\ \cline{2-3} & \cellcolor[HTML]{EFEFEF}$\kappa_{_l}$ & \cellcolor[HTML]{EFEFEF}\begin{tabular}[]{@{}c@{}}The ratio of the total power of the dominant\\ components to the scattered waves\end{tabular} \\ \cline{2-3} & $T_{_l}$ & $\mu_{_l}(1+\kappa_{_l})$ \\ \cline{2-3} \multirow{-6}{*}{\begin{tabular}[c]{@{}r@{}}$\gamma_{_l} $\end{tabular}} & \cellcolor[HTML]{EFEFEF}$K_{_l}$ & \cellcolor[HTML]{EFEFEF} $\frac{K_{_l}}{c_{_l}+\mu_{_l}\kappa_{_l}}$ \\ \hline \end{tabular} \end{table} \section{Outage Probability Analysis} \subsection{Outage Probability at the Legitimate Receiver } To analyze the outage probability at $R$, we have to refer to the corresponding CDF. We know that $\gamma_{_{R;n}} $ $\sim$ $\mathcal{D}(\cdot)$; hence its CDF is given by Eq. (6). However, the closed-form expression of the CDF corresponding to $\sum_{n=1}^{N}\gamma_{_{R;n}} $ is not tractable. Therefore, we assume that $N = 1$, which makes the outage probability of $\gamma_{_{R}}$ given by Eq. (6). At this stage, we refer to the special Rician shadowed fading case, obtained under the following substitutions: $c_{R}\rightarrow\infty$ and $\mu = 1$ \cite{channel}.
Therefore, the CDF is expressed as follows \cite{ShadRicianCDF} \begin{equation} \begin{aligned} { F}_{\gamma_{_R}}(\gamma) = \frac{1}{\Gamma(m)}\left(\frac{m}{\Xi}\right)^m \sum_{i=0}^{\infty}\frac{\Gamma(m+i)\gamma\left(i+1,\frac{\gamma}{\overline{\gamma}2\sigma^2}\right)}{\sigma^{2i}2^{i}i!\Gamma(1+i)\left(\frac{1}{2\sigma^{2}}+\frac{m}{\Xi}\right)^{m+i}}, \end{aligned} \end{equation} where $\gamma(\cdot,\cdot)$ is the incomplete Gamma function, $\Xi$ is the average power of the line-of-sight (LOS) component, $2\sigma^2$ is the average power of the scatter component, and $m$ is the fading figure, which represents the fading severity. Therefore, the outage related to the density of shadowing, $F_{\gamma_{_{SD}}}$, can be expressed as \begin{equation} \begin{aligned} F_{\gamma_{_{SD}}}(\gamma) = p_{_{los}}F_{\gamma_{_{SD}}}^{_{los}}(\gamma) +(1-p_{_{los}})F_{\gamma_{_{SD}}}^{_{nlos}}(\gamma) , \end{aligned} \end{equation} where $F_{\gamma_{_{SD}}}^{_{los}}(\gamma)$ and $F_{\gamma_{_{SD}}}^{_{nlos}}(\gamma)$ are the CDFs of the SNR evaluated when the link is LOS and non-line-of-sight (NLOS), respectively. \subsection{Outage Probability at the Attacker } It is complex to derive a closed-form expression of the CDF at the eavesdropper $E$. However, we can take advantage of the generality of $\mathcal{D}(\cdot)$ by considering special channel model cases obtained through manipulating its parameters. We let $h_{S,E}$ and $h_{J,E}$ follow the Nakagami-m distribution by fixing $c_{_l}=s_{_l}=\infty$, $\kappa_{_l}=0$, and $\mu_{_l}=m$~\cite{channel}. Hence, $\gamma_{_{I;n}}$ and $\gamma_{_{J;k}}$ (see Eq. (4)) are described by the Gamma distribution ($\gamma_{_{I;n}} \sim\textit{\textsf{G}}(\nu_{_{I;n}},\beta_{_{I;n}})$ and $\gamma_{_{J;k}} \sim\textit{\textsf{G}}(\nu_{_{J;k}},\beta_{_{J;k}})$), which have the following PDF and CDF \begin{equation} \begin{aligned} { f}_{\gamma_{_d}}(\gamma) =\frac{\beta_{_d}^{\nu_{_d}}\gamma^{\left(\nu_{_d}-1\right)}\exp({-\beta_{_d}\gamma})}{\Gamma(\nu_{_d})}, \end{aligned} \end{equation} \begin{equation} \begin{aligned} { F}_{\gamma_{_d}}(\gamma) & =1-\frac{\Gamma(\nu_{_d},\beta_{_d}\gamma) }{\Gamma(\nu_{_d})}, \end{aligned} \end{equation} where $d\in\{(I;n),(J;k)\}$, and $\beta_{_d}$ and $\nu_{_d}$ are, respectively, the scale and the shape parameters. Each antenna of the BS has the same distance from the attacker because the antennas are collocated. Therefore, the parameter $\beta_{_{I;n}}$ is the same for all antennas $n\in \{1,...,N\}$ ($\beta_{_{I}}=\beta_{_{I;n}}$), since $\beta_{_{I;n}}=\frac{ P_{_S} r_{_{S,E;n}}^{-\delta}}{\sigma_{w_{_E}}^2}$. Since all \textit{N} links share the same scale parameter, the shape parameters of the sum add up, i.e., $\nu_{_{I}}=\sum_{n=1}^{N}\nu_{_{I;n}}$, and therefore $\sum_{n=1}^{N} \gamma_{_{I;n}} = \gamma_{_{I}}$ $\sim$ $\textit{\textsf{G}}(\nu_{_{I}},\beta_{_{I}})$. The same reasoning applies to the signal issued by the jammer, where $\sum_{k=1}^{K} \gamma_{_{J;k}} = \gamma_{_{J}}$ $\sim$ $\textit{\textsf{G}}(\nu_{_{J}},\beta_{_{J}})$. For the sake of mathematical simplicity, we assume that $\nu_{_d}$ is a positive integer. Accordingly, the CDF has the following series expansion: \begin{equation} \begin{aligned} { F}_{\gamma_{_d}}(\gamma) =1- \sum_{n=0}^{\nu_d-1} \frac{(\beta_{_d}\gamma)^{n} e^{-\beta_{_d}\gamma}}{n!}. \end{aligned} \end{equation}
Before deriving the CDF at $E$ under the aforementioned special cases, we should mention that we have verified that the Nakagami-m distribution is recovered from the general PDF of the envelope corresponding to $\mathcal{D}(\cdot)$ by substituting the appropriate parameters (please refer to the Appendix).\\ By referring to Eq. (4), Eq. (9), and Eq. (11), the CDF at the attacker can be derived using the following expression \begin{equation} \begin{aligned} &F_{\gamma_{_E}}(\gamma)=\int\limits_{0}^{\infty}F_{\gamma_{_I}}(\gamma[1+\gamma_{_J}])f_{\gamma_{_J}}(\gamma_{_J})d\gamma_{_J}\\ &=1-\frac{e^{-\beta_{_I}\gamma}{\beta_{_J}}^{\nu_{_J}}} {\Gamma(\nu_{_J})}\sum_{n=0}^{\nu_{_I}-1}\sum_{q=0}^{n}{n\choose q}\int\limits_{0}^{\infty}\frac{\gamma_{_J}^{\Omega-1} e^{-\gamma_{_J}(\beta_{_I} \gamma+\beta_{_J})}}{n!(\beta_I\gamma)^{-n} }d\gamma_{_J}, \end{aligned} \end{equation} where $\Omega=q+\nu_{_J}$.\\ Then, by referring to [\citenum{Tab}, Eq. (3.351.3)], we obtain \begin{equation} \begin{aligned} { F}_{\gamma_{_E}}(\gamma) &=1-\frac{e^{-\beta_{_I}\gamma}{\beta_{_J}}^{\nu_{_J}}} {\Gamma(\nu_{_J})}\sum_{n=0}^{\nu_{_I}-1}\sum_{q=0}^{n}{n\choose q}\frac{\Gamma(\Omega)(\beta_{_I}\gamma+\beta_{_J})^{-\Omega}}{n!(\beta_I\gamma)^{-n}}. \end{aligned} \end{equation} \section{Average Secrecy Capacity Analysis} The generalized formula of the ergodic capacity for a given SNR $\gamma$ is expressed by \begin{equation} \begin{aligned} \overline{C}_{p}&= \mathbb{E}\left[\log_{2}(1+\gamma)\right]=\int_{0}^{\infty} \log_{2}(1+\gamma)f_{\gamma_{p}}(\gamma)d\gamma\\ &=\frac{1}{\log(2)}\int_{0}^{\infty} \frac{\overline{F}_{\gamma_{p}}(\gamma)}{1+\gamma} d\gamma, \end{aligned} \end{equation} where $p\in\{R,E\}$ and $\overline{F}_{\gamma_{p}}$ is the complementary CDF.\\ The general average secrecy capacity $\overline{C}_{_s}$ can be defined by \begin{equation} \overline{C}_{s} = \begin{cases} \overline{C}_{_R}-\overline{C}_{_{E}}, & \mbox{if } \gamma_{_R}> \gamma_{_E} \\ 0, & \mbox{if } \gamma_{_R}\leq \gamma_{_E} \end{cases} \end{equation} where $\overline{C}_{_R}$ is the average capacity of the main link (between $S$ and $R$) and $\overline{C}_{_E}$ is the average capacity at the eavesdropper $E$. \subsection{Ergodic Capacity at the Legitimate Receiver} By referring to Eqs. (5), (6), and (14), the ergodic capacity at the legitimate receiver can be expressed by: \begin{equation} \begin{aligned} \overline{C}_{R} =& \int\limits_{0}^{\infty}\frac{[(s_{_R}-1)\overline{\gamma}_{_L}]^{s_{_R}}T_{_R}^{\mu_{_{R}}}}{\log(2)B(s_{_R},\mu_{_{R}} )} \left(\frac{c_{_R}}{c_{_R}+\mu_{_{R}}\kappa_{_R}}\right)^{c_{_R}}\\ &\times \sum_{i=0}^{\infty} \frac{({c_{_R}})_{i} (s_{_R} + \mu_{_{R}})_{i} (K_{_R}\mu_{_{R}} \kappa_{_R})^{i} }{i!(\mu_{_{R}})_{i} } \\ &\times \frac{\log(1+\gamma)\gamma^{\mu_{_{R}}+i-1}}{\left(T_{_R}\gamma +(s_{_R}-1)\overline{\gamma}_{_L}\right)^{i+s_{_R}+\mu_{_{R}}}}d\gamma. \end{aligned} \end{equation}
To find a closed form of the aforementioned expression, we rewrite the following terms as follows \cite{eTransG} \begin{equation} \begin{aligned} \frac{1}{(T\gamma +\Phi)^{\eta}}=\frac{1}{\Phi^{\eta} \Gamma(\eta)}G_{1,1}^{1,1} \left( \frac{T\gamma}{\Phi} \bigg| \begin{matrix} 1-\eta\\ 0 \end{matrix} \right), \end{aligned} \end{equation} \begin{equation} \begin{aligned} \log(1+\gamma)=G_{2,2}^{1,2} \left( \gamma \bigg| \begin{matrix} 1,\\ 1, \end{matrix} \begin{matrix} 1\\ 0 \end{matrix} \right), \end{aligned} \end{equation} where $G^{m,n}_{p,q} \left( \begin{matrix} - | (\cdot,\cdot) \end{matrix}\right)$ is the Meijer G-function, $\Phi=(s_{_R}-1)\overline{\gamma}_{_L}, \alpha = \mu_{_{R}}+i,$ and $\eta = i+s_{_R}+\mu_{_{R}}.$\\ Then, by substituting Eq. (17) and Eq. (18) in Eq. (16) and referring to [\citenum{Mathematica}, 07.34.21.0011.01], we obtain: \begin{equation} \begin{aligned} \overline{C}_{_R}=& \frac{[(s_{_R}-1)\overline{\gamma}_{_L}]^{s_{_R}}T_{_R}^{\mu_{_{R}}}}{\log(2)B(s_{_R},\mu_{_{R}} )} \left(\frac{c_{_R}}{c_{_R}+\mu_{_{R}}\kappa_{_R}}\right)^{c_{_R}}\\ &\times \sum_{i=0}^{\infty} \frac{({c_{_R}})_{i} (s_{_R} + \mu_{_{R}})_{i} (K_{_R}\mu_{_{R}} \kappa_{_R})^{i} }{i!(\mu_{_{R}})_{i} } \\ &\times \frac{1}{\Phi^{\eta} \Gamma(\eta)}\times G_{3,3}^{3,2} \left( \frac{T_{_R}}{\Phi} \bigg| \begin{matrix} 1-\eta,~ \\ 0, \end{matrix} \begin{matrix} -\alpha,~ \\ -\alpha, \end{matrix}\begin{matrix} 1-\alpha\\ -\alpha \end{matrix} \right). \end{aligned} \end{equation} \subsection{Ergodic Capacity at the Eavesdropper} Using Eq. (13) and Eq. (14), we can write the ergodic capacity as follows \begin{equation} \begin{aligned} \overline{C}_{_E}= \frac{\beta_{_J}^{\nu_{_J}}}{\Gamma(\nu_{_J})} \sum_{n=0}^{\nu_{_I}-1}\sum_{q=0}^{n}{n\choose q}\int\limits_{0}^{\infty}\frac{e^{-\beta_{_I} \gamma}(\beta_{_I} \gamma)^n \Gamma(\Omega)}{(\beta_{_I}\gamma+\beta_{_J})^{\Omega}\log(2)(1+\gamma)n!} d\gamma. \end{aligned} \end{equation} To facilitate the integral calculation, we can perform the following transformations into the Fox H-function \begin{equation} (\beta_{_I}\gamma+\beta_{_J})^{-\Omega}=\frac{1}{\beta_{_J}^{\Omega}\Gamma(\Omega)}H_{1,1}^{1,1} \left( \frac{\beta_{_I}}{\beta_{_J}}\gamma \bigg| \begin{matrix} (1-\Omega,1)\\ (0,1) \end{matrix} \right), \end{equation} \begin{equation} \frac{1}{1+\gamma}=H_{1,1}^{1,1} \left( \gamma \bigg| \begin{matrix} (0,1) \\ (0,1) \end{matrix} \right), \quad e^{-\beta_{_I}\gamma}=H^{1,0}_{0,1} \left( \beta_{_I}\gamma \bigg| \begin{matrix} -\\ (0,1) \end{matrix} \right). \end{equation} Then, we substitute Eq. (21) and Eq. (22) in Eq. (20) and compute the integral \cite{TripFoxInt}. Hence, we obtain \begin{equation} \begin{aligned} \overline{C}_{_E}&=\frac{1}{\log(2)\Gamma(\nu_{_J})}\sum_{n=0}^{\nu_{_I}-1}\sum_{q=0}^{n}{n\choose q}\frac{1}{n!\,\beta_{_I}\,\beta_{_J}^{q}} \\ &\times H_{0,1;1,1;1,1}^{1,0;1,1;1,1} \left( \frac{1}{\beta_{_I}},\frac{1}{\beta_{_J}} \bigg| \begin{matrix} (-n;1,1)\\ (-;-) \end{matrix} \bigg| \begin{matrix} (0,1)\\ (0,1) \end{matrix} \bigg| \begin{matrix} (1-\Omega,1)\\ (0,1) \end{matrix} \right), \end{aligned} \end{equation} where $H_{m_{1},n_{1};m_{2},n_{2};m_{3},n_{3}}^{p_{1},q_{1};p_{2},q_{2};p_{3},q_{3}}(-|(\cdot,\cdot))$ is the bivariate Fox H-function \cite{eBivariate_Relay,eT}. Therefore, by substituting Eq. (19) and Eq. (23) into Eq. (15), we obtain the average secrecy capacity as given at the top of the next page.
\begin{figure*} \begin{equation} \begin{aligned} \overline{C}_{s}=\frac{1}{\log(2)}&\left\{ \frac{[(s_{_R}-1)\overline{\gamma}_{_L}]^{s_{_R}}T_{_R}^{\mu_{_{R}}}}{B(s_{_R},\mu_{_{R}} )} \left(\frac{c_{_R}}{c_{_R}+\mu_{_{R}}\kappa_{_R}}\right)^{c_{_R}} \sum_{i=0}^{\infty} \frac{({c_{_R}})_{i} (s_{_R} + \mu_{_{R}})_{i} (K_{_R}\mu_{_{R}} \kappa_{_R})^{i} }{i!(\mu_{_{R}})_{i}\Phi^{\eta} \Gamma(\eta) } G_{3,3}^{3,2} \left( \frac{T_{_R}}{\Phi} \bigg| \begin{matrix} 1-\eta,\\ 0, \end{matrix} \begin{matrix} -\alpha, \\ -\alpha, \end{matrix}\begin{matrix} 1-\alpha\\ -\alpha \end{matrix} \right)\right.\\ &-\left. \frac{1}{\Gamma(\nu_{_J})}\sum_{n=0}^{\nu_{_I}-1}\sum_{q=0}^{n}{n\choose q}\frac{1}{n!\,\beta_{_I}\,\beta_{_J}^{q}} H_{0,1;1,1;1,1}^{1,0;1,1;1,1} \left( \frac{1}{\beta_{_I}},\frac{1}{\beta_{_J}} \bigg| \begin{matrix} (-n;1,1)\\ (-;-) \end{matrix} \bigg| \begin{matrix} (0,1)\\ (0,1) \end{matrix} \bigg| \begin{matrix} (1-\Omega,1)\\ (0,1) \end{matrix} \right)\right\} \end{aligned} \end{equation} \sepline \end{figure*} \section{Numerical Results and Discussions} In this Section, and before investigating the security performance of using the jammer, we first discuss the impact of shadowing (light shadowing $L_S$ and dense shadowing $D_S$) and blockage (light blockage $L_B$ and dense blockage $D_B$) on the legitimate receiver link by studying the Rician shadowed model as a special case of $\mathcal{D}(\cdot)$. Under the values mentioned in \cite{ShadRician} (see Fig. 2), we note that for $(L_{B}, L_{S})$, we have an outage probability of 0.02 at SNR = 20 dB. For the same SNR, the outage probability jumps to more than 0.56 in the case of $(L_{B}, D_{S})$. Moreover, in the scenario where the transmission link is subject to $(D_{B}, L_{S})$, the outage probability is 0.2 at SNR = 14 dB. However, to maintain the same level of outage (0.2), the communication link needs a higher SNR of at least 28 dB when the shadowing becomes dense ($D_{B}, D_{S}$). Therefore, we deduce that the shadowing effect dominates the blockage. In other words, our communication system is much more sensitive to the shadowing effect than to the blockage density. \begin{figure}[H] \includegraphics[height=65mm, width=0.95\linewidth]{BlockageShodowing.eps} \caption{Impact of the shadowing and the blockage density on the outage probability at the legitimate receiver. } \end{figure} \begin{figure}[H] \includegraphics[height=65mm, width=0.95\linewidth]{OutageEve.eps} \caption{Outage probability at the eavesdropper versus distance from the jammer to the eavesdropper $r_{_{J,E}}$. } \end{figure} In Fig. 3, we study the outage probability at $E$ versus $r_{_{J,E}}\in[0~\mbox{m}, 30~\mbox{m}]$, with $P_{_J}$ fixed at 5 dB and $P_{_S}$ = 10 dB. Two different threshold scenarios are proposed: $\zeta$ = -8 dB and $\zeta$ = -2 dB. Suppose that, to ensure secure communication, the outage probability at $E$ should exceed $10^{-2}$. For $\zeta$ = -8 dB, this requires $r_{_{J,E}}$ to be less than 1 m when the attacker is very close to the source ($r_{_{S,E}} $= 1 m). If the attacker is 10 m away from $S$, the communication link is protected if $r_{_{J,E}} <$ 15 m. In the second scenario, when the security requirements necessitate $\zeta$ = -2 dB, the jammer should be at most 1.5 m away from the eavesdropper to achieve an outage probability of more than $10^{-2}$ when $r_{_{S,E}}$ = 1 m. Both scenarios emphasize the impact of the distance separating $J$ from $E$ in the critical case where $r_{_{S,E}} \leq $ 10 m.
In those scenarios, the use of $J$ is very effective in protecting the communication link within a short distance range. As $r_{_{J,E}}$ increases, the outage probability stops decreasing and reaches an outage floor. This is due to the impact of the path loss on the jamming signal under the aforementioned fading model, power setup, and number of antennas at $J$. Hence, to increase the jamming range of $J$ and mitigate the path loss effect, we have to increase $P_{_J}$ and $K$, which are discussed in further detail in Figs. 4 and 5. \begin{figure} \includegraphics[height=63mm, width=0.90\linewidth]{ErgodicEve.eps} \caption{Effect of the jamming signal power $P_{_J}$ on the ergodic capacity at the eavesdropper end.} \end{figure} Fig. 4 demonstrates the impact of the jamming power $P_{_J}$ generated by the jammer on the ergodic capacity at the eavesdropper $E$ with respect to the source power $P_{_{S}}$. In the first scenario, fixing $r_{_{J,E}}$ = 10 m, we can observe that increasing $P_{_J}$ has no effect on the ergodic capacity in the low transmission power region $P_{_{S}}\in $[-10 dB, 8 dB], since the average capacity is already null because of the low values of $P_{_{S}}$. Hence, using a jammer does not make any difference within this power interval. However, for high $P_{_{S}}$ values such as 25 dB and with $P_{_J}$ = -5 dB, the ergodic capacity reaches about 1.7 bps/Hz. Therefore, we need to increase the jamming power to at least $P_{_J}$ = 15 dB to decrease the average capacity to 0.95 bps/Hz, and to $P_{_J}$ = 30 dB to achieve a rate of 0.05 bps/Hz. In the second scenario, when $r_{_{J,E}}$ = 70 m $> r_{_{S,E}}$ (the attacker is closer to the BS than to the jammer), we remark that for the low range of $P_{_{S}}$, the influence of the AN signal is similar to the previous scenario. On the other hand, for high $P_{_{S}}$, the difference between the rates corresponding to $P_{_J}$ = -5 dB and $P_{_J}$ = 15 dB is negligible, and we need at least 30 dB at the jammer to decrease the rate from 3.1 bps/Hz to 2.7 bps/Hz. Therefore, the communication can be secured with low jamming power if $r_{_{J,E}} <$ $ r_{_{S,E}}$; otherwise, we need to increase $P_{_J}$. \begin{figure} \includegraphics[height=63mm, width=0.9\linewidth]{SecrecyCapacity.eps} \caption{Average secrecy capacity for different numbers of antennas at the jammer.} \end{figure} Now, we focus on the relation between the average secrecy capacity $\overline{C}_{_s}$ and the number of antennas $K$ at the jammer. As shown in Fig. 5, fixing the number of antennas at the BS to $N = 4$, we observe that the average secrecy capacities for different values of $K$ are tightly close for BS powers in [-10 dB, 18 dB]. We note that as $P_{_S}$ increases, we start to notice the impact of $K$ on the average secrecy capacity. If we need $\overline{C}_{_s}$ to be at least 5.5 bps/Hz for $P_{_S}$ = 35 dB, we can only achieve 4 bps/Hz when not using any jammer. To satisfy such a requirement, we need at least 4 antennas at the jammer. Therefore, the number of antennas $K$ has a major impact on the performance of the security scheme. \section{Conclusion} In this paper, we examined the PLS in the wireless vehicular network when the communication is subject to an eavesdropping attack. We proposed the use of a friendly jammer that transmits AN to perturb the attacker's channel and decrease its SINR.
As the channel model, we adopted the recently proposed Double Shadowed $\kappa$-$\mu$ Fading Model, which covers a variety of fading distributions. To evaluate the security performance, we studied the outage probability and the secrecy capacity with respect to special fading models such as Rician shadowed and Nakagami-m. The results showed that the jammer has a significant impact on the security performance: a notable outage probability can be obtained at the eavesdropping end, especially for short distances separating the jammer from the attacker. Moreover, the plots demonstrated the improvement of the average secrecy capacity obtained by equipping the jammer with a large number of antennas. \section*{Appendix} To prove that the Nakagami-m distribution is a special case of the envelope of $\mathcal{D}(\cdot)$, we can refer to the envelope equation [\citenum{channel}, Eq. (5)] and substitute $c_{_l}= s_{_l}= \infty $, $\kappa_{_l}$ = 0 and $\mu_{_l}$ = m. Then we perform the following computation: \begin{equation} \begin{aligned} &\lim \limits_{\substack{c_{_l}\to\infty\\s_{_l}\to\infty}} { f}_{X_{_l}}(X) =\lim \limits_{\substack{c_{_l}\to\infty\\s_{_l}\to\infty}}\frac{2(s_{_l}-1)^{s_{_l}}c_{_l}^{c_{_l}}T_{_l}^{\mu_{_l}}X^{2\mu_{_l}-1}{\hat{X}}^{2s_{_l}}}{\left(T_{_l}X^{2}+(s_{_l}-1) \hat{X}^{2}\right)^{s_{_l}+\mu_{_l}}}\\ &\frac{(c_{_l}+\mu_{_l}\kappa_{_l})^{-c_{_l}}}{B(s_{_l},\mu_{_l})}\times{}_{2}F_{1}\left( c_{_l}, s_{_l}+\mu_{_l};\mu_{_l}; \frac{K_{_l}\mu_{_l}\kappa_{_l}X}{T_{_l}X+(s_{_l}-1)\hat{X}^{2}}\right)\\ &~~~~=\lim \limits_{s_{_l}\to\infty} \frac{2m^{m}X^{2m-1}\Gamma(s_{_l}+m)}{\Gamma(s_{_l})\Gamma(m)\hat{X}^{2m}}\left( \frac{s_{_l}-1}{\frac{mX^{2}}{\hat{X}^2}+s_{_l}-1}\right)^{s_{_l}}\left(\frac{mX^{2}}{\hat{X}^2}+s_{_l}-1\right)^{-m}\\ &~~~~= \frac{2}{\Gamma(m)}\left(\frac{m}{\hat{X}^{2}}\right)^{m}\exp{\left(-\frac{mX^{2}}{\hat{X}^2}\right)}X^{2m-1}, \end{aligned} \end{equation} where $X$ is the random variable, $m$ is the shape parameter, $\hat{X}$ is the root mean square (rms) of the signal envelope, and $\hat{X}^{2}$ is the controlling spread parameter. Under these substitutions, $T_{_l}=m$ and the argument of the hypergeometric function vanishes since $\kappa_{_l}=0$, so the hypergeometric factor reduces to $_{2}F_{1}(\cdot,\cdot;\cdot;0)=1$ according to the identity [\citenum{Mathematica}, 07.23.03.0001.01]. \bibliographystyle{IEEEtran}
In this kind of menace, the malicious network entity is able to listen to a private communication by intercepting the transmitted signal and revealing the confidential information.\let\thefootnote\relax\footnotetext{ This work was supported in part by the U.S. National Science Foundation (NSF) under the grant CNS-1650831.} What makes the eavesdropping attacks very critical is the fact that it is difficult to be detected by the victims since it could be processed without leaving any traces. To deal with this security dilemma, some related works proposed the use of artificial noise (AN) where the transmitter perturbs the attacker channel by sending a dedicated signal \cite{an1,an2}. This signal is generated orthogonally to the main link between the legitimate entities so that it will affect only the eavesdropper link. Other papers studied the employment of a friendly jammer (J) which has the responsibility of jamming the eavesdroppers' channels instead of the transmitter \cite{jam1,jam2}. \\ In general, by employing a jammer $J$, as a third network entity, could be more efficient in the case of multiple eavesdroppers attacks. In other words, if the network is subject to several attacks, it is better to have a jammer node to deal with all the attackers rather than each communicating node deals with all the attackers by itself (it will be very expensive in terms of power if each transmitter will dedicate a fraction of its power to jam all the eavesdroppers' channels). \subsection{Our Contribution} In our paper, we are focusing on using the friendly jammer to protect V2I communications. As a channel fading, we adopt the Double Shadowed $\kappa$-$\mu$ Fading Model, noted by $\mathcal{D}(\cdot)$, recently presented in \cite{channel}, which is more general and covers wide ranges of fading such as double shadowed Rice, Rician shadowed, Nakagami-q, Nakagami-m, Rayleigh, one-sided Gaussian, etc. We highlight our contributions as follows: \begin{enumerate} \item[$\bullet$] We propose the use of friendly jammer $J$ in V2I communications under an eavesdropping attack scenario. \item[$\bullet$] We adopt the new channel model $\mathcal{D}(\cdot)$. \item[$\bullet$] We derive a closed form expression for the ergodic capacity at the receiver under $\mathcal{D}(\cdot)$ model. \item[$\bullet$] We derive a closed form expression for cumulative distribution function (CDF) of the signal-to-interference-plus-noise-ratio (SINR) and the ergodic capacity at the eavesdropper while considering Nakagami-m as special case of $\mathcal{D}(\cdot)$. \item[$\bullet$] We examine the impact of the blockage density at the receiver by adopting the special case model Rician shadowed. \end{enumerate} \subsection{Paper Structure} The paper is constructed as follows: Section \RN{2} introduces the system model. Section \RN{3} studies the outage probability while Section \RN{4} examines the ergodic capacity and the secrecy capacity. Then, Section \RN{5} evaluates the performance of the security approach based on numerical results. Finally, we outline our conclusion in Section \RN{6}. \section{System Model} \subsection{Vehicular Communications and Attack Model} \begin{figure}[H] \includegraphics[height=55mm, width=\linewidth]{attackmodel.PNG} \caption{V2I communications in the presence of multiple eavesdroppers. } \end{figure} In V2I communications, base stations (BSs) and vehicles should be able to exchange information and data securely. 
However, the transmitted signals may be subject to an intended overhearing where an eavesdropper intercepts the signal and reveals the secret messages. In Fig. 1, we have a typical model where the attacker \textit{E} is listening to the communication between the source $S$ (the base station) and the legitimate receiver $R$. In this case, $J$ tends to protect the network by sending AN to the attacker $E$, while $R$ is immune since the AN is orthogonal to its channel. \subsection{Channel Model} The received signal at \textit{R} and \textit{E} are respectively: \begin{equation} \begin{aligned} { y_{_R}}& = \sum_{n=1}^{N}h_{_{S,R;n}}x_{_I} + \sum_{k=1}^{K}h_{_{J,R;k}}x_{_J} +w_{_R}\\ &= \sum_{n=1}^{N}h_{_{S,R;n}}x_{_I} +w_{_R}, \end{aligned} \end{equation} \begin{equation} \begin{aligned} { y_{_E}}& =\sum_{n=1}^{N}h_{_{S,E;n}}x_{_I} + \sum_{k=1}^{K}h_{_{J,E;k}}x_{_J} +w_{_E}, \end{aligned} \end{equation} where the channel model parameters are defined in Table I. \begin{table}[h] \caption{ Channel model parameters description} \begin{tabular}{|c|c|} \hline \rowcolor[HTML]{FFCE93} Parameters & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCE93}Description} \\ \hline N & \multicolumn{1}{c|}{Number of antennas at the BS} \\ \hline \rowcolor[HTML]{EFEFEF} K & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}Number of antennas at the Jammer \textit{J }} \\ \hline $h_{_{a,b;c}}$ & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}The fading amplitude of the channel corresponding\\ to the link between the antenna $c$ of the node $a$\\ and the receiving node $b$.\\ $a\in\{J,S\}$ and $b\in\{R,E\}$\end{tabular}} \\\hline \cline{2-2} & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}$g_{_{a,b}}$ is the channel gain where $|g_{a,b}|^2$ $\sim \mathcal{D}(\cdot)$} \\ \cline{2-2} & \multicolumn{1}{c|}{\begin{tabular}[]{@{}c@{}}$r_{_{a,b;c}}$ is the distance between the antenna $c$ \\of the node $a$ and the receiving node $b$\end{tabular}} \\ \cline{2-2} \multirow{-4}{*}{\begin{tabular}[c]{@{}c@{}}$h_{_{a,b;c}}$\\ $=g_{_{a,b;c}}\sqrt{r_{_{a,b;c}}^{-\delta}}$\end{tabular}} & \cellcolor[HTML]{EFEFEF}$\delta$ is path loss exponent \\ \hline \rowcolor[HTML]{EFEFEF} $x_{_I}$ & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}\begin{tabular}[]{@{}c@{}}The confidential information signal\\ sent by $S$ with power $P_{_S}$/antenna \end{tabular}} \\ \hline $x_{_J}$ & \multicolumn{1}{c|}{\begin{tabular}[]{@{}c@{}}jamming signal (AN) emitted by $J$\\ with power $P_{_J}$/antenna \end{tabular}} \\ \hline \rowcolor[HTML]{EFEFEF} $w_{_b}$ & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}\begin{tabular}[]{@{}c@{}}The additive white Gaussian noise (AWGN) \\at the node $b$ with variance ${\sigma_{w_{_b}}^2}$ \end{tabular}} \\ \hline \end{tabular} \end{table} As we can deduce from Eq. 
(1), $\sum_{k=1}^{K}h_{_{J,R}}x_{_J}=0$, which means that the jamming signal will only affect the attacker while conserving the same signal at the legitimate receiver.\\ The received SNR and SINR at $R$ and $E$, respectively, are \begin{equation} \begin{aligned} { \gamma_{_R}}=\frac{ \sum_{n=1}^{N}P_{_S}|g_{_{S,R}}|^{2}r_{_{S,R;n}}^{-\delta} }{\sigma_{w_{_R}}^2} = \sum_{n=1}^{N}\gamma_{_{R;n}}, \end{aligned} \end{equation} \begin{equation} \begin{aligned} { \gamma_{_E}} =&\frac{\sum_{n=1}^{N}P_{_S}|h_{_{S,E;n}}|^{2} }{\sum_{k=1}^{K}P_{_{J}}|h_{_{J,E;k}}|^{2}+{\sigma_{w_{_E}}^2}}=\frac{ \frac{\sum_{n=1}^{N}P_{_S}|g_{_{S,E;n}}|^{2}r_{_{S,E;k}}^{-\delta}}{\sigma_{w_{_E}}^2}} {1+\frac{\sum_{k=1}^{K}P_{_{J}}|g_{_{J,E;k}}|^{2}r_{_{J,E;k}}^{-\delta}}{{\sigma_{w_{_E}}^2}}}\\&=\frac{ \sum_{n=1}^{N}\gamma{_{_{I;n}}} }{1+\sum_{k=1}^{K}\gamma{_{_{J;k}}}}=\frac{ \gamma{_{_{I}}} }{1+\gamma{_{_{J}}}}, \end{aligned} \end{equation} where $\gamma_{_{R;n}} $ $\sim$ $\mathcal{D}(\cdot)$ and $\gamma{_{_{I;n}}}$ $\sim$ $\mathcal{D}(\cdot)$ are, respectively, the SNRs corresponding to the confidential signal received at $R$ and $E$ via the $n$-th antenna, while $\gamma{_{_{J;k}}}$ $\sim$ $\mathcal{D}(\cdot)$ is the SNR due to the jamming signal sent by $J$ at $E$ via the $k$-th antenna. The general probability density function (PDF) and CDF of the random variable (RV) $\gamma_{_l}$, where $ l\in \{(I;n), (J;k), (R;n) \}$, are respectively \begin{equation} \begin{aligned} { f}_{\gamma_{_l}}(\gamma) &=\frac{(s_{_l}-1)^{s_{_l}}c_{_l}^{c_{_l}}T_{_l}^{\mu_{_l}}\gamma^{\mu_{_l}-1}\overline{\gamma}^{s_{_l}}}{(c_{_l}+\mu_{_l}\kappa_{_l})^{c_{_l}}B(s_{_l},\mu_{_l})(T_{_l}\gamma+(s_{_l}-1) \overline{\gamma})^{s_{_l}+\mu_{_l}}}\\ &\times_{2}F_{1}\left( c_{_l}, s_{_l}+\mu_{_l};\mu_{_l}; \frac{K_{_l}\mu_{_l}\kappa_{_l}\gamma}{T_{_l}\gamma+(s_{_l}-1)\overline{\gamma}}\right), \end{aligned} \end{equation} \begin{equation} \begin{aligned} &{ F}_{\gamma_{_l}}(\gamma) =\left( \frac{c_{_l}}{c_{_l}+\kappa_{_l}\mu_{_l}}\right)^{c_{_l}} \left( \frac{T_{_l}\gamma}{\overline{\gamma}(s_{_l}-1)}\right)^{\mu_{_l}} \sum_{i=0}^{\infty} \left(\frac{K_{_l}\mu_{_l}\kappa{_l}\gamma}{(s_{_l}-1)\overline{\gamma}}\right)^i \\& \frac{(c_{_l})_i+(i+\mu_{_l})_{s_{_l}}}{i!\Gamma(s_{_l})(i+\mu_{_l})} {}_{2}F_{1}( i+\mu_{_l},i+\mu_{_l}+ s_{_l};i+\mu_{_l}+1;\mu_{_l}; \tau), \end{aligned} \end{equation} where $_{2}F_{1}(\cdot,\cdot;\cdot;.)$ represents the Hypergeometric function, $B(\cdot,\cdot)$ denotes the Beta function, $\tau=\frac{-T_{_l}\gamma}{\overline{\gamma}(s_{_l}-1)}$ and $(x)_{i}=\frac{\Gamma(x+i)}{\Gamma(x)}$ is the the Pochhammer symbol. For the sake of organization, the parameters on which depends the distribution of $\gamma_{_l}$, are presented in Table. II. 
\begin{table}[h] \caption{ $\mathcal{D}(\cdot)$ parameters description\cite{channel}} \begin{tabular}{|r|c|c|} \hline \rowcolor[HTML]{FFCE93} \multicolumn{1}{|c|}{\cellcolor[HTML]{FFCE93}SNR} & Parameters & Description \\ \hline & $c_{_l}$ & Shape of the Nakagami-m RV \\ \cline{2-3} & \cellcolor[HTML]{EFEFEF}$s_{_l}$ & \cellcolor[HTML]{EFEFEF}Shape of the inverse of Nakagami-m RV \\ \cline{2-3} & $\mu_{_l}$ & Number of multipath clusters \\ \cline{2-3} & \cellcolor[HTML]{EFEFEF}$\kappa_{_l}$ & \cellcolor[HTML]{EFEFEF}\begin{tabular}[]{@{}c@{}}The ratio of the total power of the dominant\\ components to the scattered waves\end{tabular} \\ \cline{2-3} & $T_{_l}$ & $\mu_{_l}(1+\kappa_{_l})$ \\ \cline{2-3} \multirow{-6}{*}{\begin{tabular}[c]{@{}r@{}}$\gamma_{_l} $\end{tabular}} & \cellcolor[HTML]{EFEFEF}$K_{_l}$ & \cellcolor[HTML]{EFEFEF} $\frac{K_{_l}}{(c_{_l}+\mu_{_l}*\kappa_{_l})}$ \\ \hline \end{tabular} \end{table} \section{Outage Probability Analysis} \subsection{Outage Probability at the Legitimate Receiver } To analyze the outage probability at $R$, we have to refer to the corresponding CDF. We know that $\gamma_{_{R;n}} $ $\sim$ $\mathcal{D}(\cdot)$, hence its CDF is given by Eq. (6). However, the closed-form expression of the CDF corresponding of $\sum_{n=1}^{N}\gamma_{_{R;n}} $ is not tractable. Therefore, we are assuming that N = 1, which make the outage probability of $\gamma_{_{R}}$ represented by Eq. (6). At this stage, we are referring to the special fading case Rician shadowed under the following substitutions: $c_{R}\rightarrow\infty$ and $\mu = 1$ \cite{channel}. Therefore, the CDF is expressed as follows \cite{ShadRicianCDF} \begin{equation} \begin{aligned} { F}_{\gamma_{_R}}(\gamma) = \frac{1}{\Gamma(m)}\left(\frac{m}{\Xi}\right)^m \sum_{i=0}^{\infty}\frac{\Gamma(m+i)\gamma\left(i+1,\frac{\gamma}{\overline{\gamma}2\sigma^2}\right)}{\sigma^{2i}2^{i}i!\Gamma(1+i)\left(\frac{1}{2\sigma^{2}}+\frac{m}{\Xi}\right)^{m+i}}, \end{aligned} \end{equation} where $\gamma(\cdot,\cdot)$ is the Incomplete Gamma function, $\Xi$ is the average power of the line of sight (LOS) component, $2\sigma^2$ is the average power of the scatter component, $m$ is fading figure which represents the fading severity. Therefore, the outage related to the density of shadowing, $F_{\gamma_{_{SD}}}$ could be presented by \begin{equation} \begin{aligned} F_{\gamma_{_{SD}}}(\gamma) = p_{_{los}}F_{\gamma_{_{SD}}}^{_{los}}(\gamma) +(1-p_{_{los}})F_{\gamma_{_{SD}}}^{_{nlos}}(\gamma) , \end{aligned} \end{equation} where $F_{\gamma_{_{SD}}}^{_{los}}(\gamma)$ and $F_{\gamma_{_{SD}}}^{_{nlos}}(\gamma)$ are the CDFs of the SNR evaluated when the link is LOS, and non-line-of-side (NLOS), respectively. \subsection{Outage Probability at the Attacker } It is complex to derive a closed-form expression of the CDF at the eavesdropper $E$. However, we can take advantage of the $\mathcal{D}(\cdot)$ generality by considering special channel model cases by manipulating its parameters. We suggest that $h_{S,E}$ and $h_{J,E}$ follow Nakagami-m distribution by fixing $c_{_R}=s_{_R}=\infty$, $\mu_{_I}=1$ and $\kappa_{_I}=m$\cite{channel}. Hence, $\gamma_{_{I;n}}$ and $\gamma_{_{J;k}}$ (see Eq. 
(4)) are described by the Gamma distribution ($\gamma_{_{I;n}} \sim\textit{\textsf{G}}(\nu_{_{I;n}},\beta_{_{I;n}})$ and $\gamma_{_{J;k}} \sim\textit{\textsf{G}}(\nu_{_{J;k}},\beta_{_{J;k}})$), whose PDF and CDF are \begin{equation} \begin{aligned} { f}_{\gamma_{_d}}(\gamma) =\frac{\beta_{_d}^{\nu_{_d}}\gamma^{\left(\nu_{_d}-1\right)}\exp({-\beta_{_d}\gamma})}{\Gamma(\nu_{_d})}, \end{aligned} \end{equation} \begin{equation} \begin{aligned} { F}_{\gamma_{_d}}(\gamma) & =1-\frac{\Gamma(\nu_{_d},\beta_{_d}\gamma) }{\Gamma(\nu_{_d})}, \end{aligned} \end{equation} where $d\in\{(I;n),(J;k)\}$, and $\beta_{_d}$ and $\nu_{_d}$ are, respectively, the rate and the shape parameters. Each antenna of the BS has the same distance from the attacker because the antennas are collocated. Therefore, the parameter $\beta_{_{I;n}}$ is the same for all antennas $n\in \{1,...,N\}$ ($\beta_{_{I}}=\beta_{_{I;n}}$), since $\beta_{_{I;n}}$ is determined by $\frac{ P_{_S} r_{_{S,E;n}}^{-\delta}}{\sigma_{w_{_E}}^2}$. Since all \textit{N} links share the same rate parameter, the shape parameters add up, i.e., $\nu_{_{I}}=\sum_{n=1}^{N}\nu_{_{I;n}}$. Therefore $\sum_{n=1}^{N} \gamma_{_{I;n}} = \gamma_{_{I}}$ $\sim$ $\textit{\textsf{G}}(\nu_{_{I}},\beta_{_{I}})$. The same reasoning applies to the signal issued by the jammer, where $\sum_{k=1}^{K} \gamma_{_{J;k}} = \gamma_{_{J}}$ $\sim$ $\textit{\textsf{G}}(\nu_{_{J}},\beta_{_{J}})$. For the sake of mathematical simplicity, we assume that $\nu_{_d}$ is a positive integer. Accordingly, the CDF admits the following series expansion: \begin{equation} \begin{aligned} { F}_{\gamma_{_d}}(\gamma) =1- \sum_{n=0}^{\nu_d-1} \frac{(\beta_{_d}\gamma)^{n} e^{-\beta_{_d}\gamma}}{n!} . \end{aligned} \end{equation} Before deriving the CDF at $E$ under the aforementioned special cases, we should mention that we verified that the Nakagami-m distribution is indeed obtained from the general PDF of the envelope corresponding to $\mathcal{D}(\cdot)$ by substituting the appropriate parameters (please refer to the Appendix).\\ By referring to Eq. (4), Eq. (9), and Eq.
(11), the CDF at the attacker can be derived using the following expression \begin{equation} \begin{aligned} &F_{\gamma_{E}}=\int\limits_{0}^{\infty}F_{\gamma_{_I}}(\gamma[1+\gamma_{_J}])f_{\gamma_{_J}}(\gamma_{_J})d\gamma_{_J}\\ &=1-\frac{e^{-\beta_{_I}\gamma}{\beta_{_J}}^{\nu_{_J}}} {\Gamma(\nu_{_J})}\sum_{n=0}^{\nu_{_I}-1}\sum_{q=0}^{n}{n\choose q}\int\limits_{0}^{\infty}\frac{\gamma_{_J}^{\Omega-1} e^{-\gamma_{_J}(\beta_{_I} \gamma+\beta_{_J})}}{n!(\beta_I\gamma)^{-n} }d\gamma_{_J}, \end{aligned} \end{equation} where $\Omega=q+\nu_{_J}$.\\ Then, by referring to [\citenum{Tab}, Eq. (3.351.3)], we obtain \begin{equation} \begin{aligned} { F}_{\gamma_{_E}}(\gamma) &=1-\frac{e^{-\beta_{_I}\gamma}{\beta_{_J}}^{\nu_{_J}}} {\Gamma(\nu_{_J})}\sum_{n=0}^{\nu_{_I}-1}\sum_{q=0}^{n}{n\choose q}\frac{\Gamma(\Omega)(\beta_{_I}\gamma+\beta_{_J})^{-\Omega}}{n!(\beta_I\gamma)^{-n}}. \end{aligned} \end{equation} \section{Average Secrecy Capacity Analysis} The generalized formula of the ergodic capacity for a given SNR $\gamma$ is expressed by \begin{equation} \begin{aligned} \overline{C}_{p}&= \mathbb{E}\left[\log_{2}(1+\gamma)\right]=\int_{0}^{\infty} \log_{2}(1+\gamma)f_{\gamma_{p}}(\gamma)d\gamma\\ &=\frac{1}{\log(2)}\int_{0}^{\infty} \frac{\overline{F}_{\gamma_{p}}(\gamma)}{1+\gamma} d\gamma, \end{aligned} \end{equation} where $p\in\{R,E\}$ and $\overline{F}_{\gamma_{p}}$ is the complementary CDF.\\ The general average secrecy capacity $\overline{C}_{_s}$ can be defined by \begin{equation} \overline{C}_{s} = \begin{cases} \overline{C}_{_R}-\overline{C}_{_{E}}, & \mbox{if } \gamma_{_R}> \gamma_{_E} \\ 0, & \mbox{if } \gamma_{_R}< \gamma_{_E} \end{cases} \end{equation} where $\overline{C}_{_R}$ is the average capacity of the main link (between $S$ and $R$) and $\overline{C}_{_E}$ is the average capacity at the eavesdropper $E$. \subsection{Ergodic Capacity at the Legitimate Receiver} By referring to Eqs. (5), (6), and (14), the ergodic capacity at the legitimate receiver can be expressed by: \begin{equation} \begin{aligned} \overline{C}_{R} =& \int\limits_{0}^{\infty}\frac{[(s_{_R}-1)\overline{\gamma}_{_L}]^{s_{_R}}T_{_R}^{\mu_{_{R}}}}{\log(2)B(s_{_R},\mu_{_{R}} )} \left(\frac{c_{_R}}{c_{_R}+\mu_{_{R}}\kappa_{_R}}\right)^{c_{_R}}\\ &\times \sum_{i=0}^{\infty} \frac{({c_{_R}})_{i} (s_{_R} + \mu_{_{R}})_{i} (K_{_R}\mu_{_{R}} \kappa_{_R})^{i} }{i!(\mu_{_{R}})_{i} } \\ &\times \frac{\log(1+\gamma)\,\gamma^{\mu_{_{R}}+i-1}}{\left(T_{_R}\gamma +(s_{_R}-1)\overline{\gamma}_{_L}\right)^{i+s_{_R}+\mu_{_{R}}}}d\gamma. \end{aligned} \end{equation} To find a closed form of the aforementioned expression, we rewrite the following expressions as follows \cite{eTransG} \begin{equation} \begin{aligned} \frac{1}{(T\gamma +\Phi)^{\eta}}=\frac{1}{\Phi^{\eta} \Gamma(\eta)}G_{1,1}^{1,1} \left( \frac{T\gamma}{\Phi} \bigg| \begin{matrix} 1-\eta\\ 0 \end{matrix} \right) \end{aligned} \end{equation} \begin{equation} \begin{aligned} \log(1+\gamma)=G_{2,2}^{1,2} \left( \gamma \bigg| \begin{matrix} 1,\\ 1, \end{matrix} \begin{matrix} 1\\ 0 \end{matrix} \right), \end{aligned} \end{equation} where $G^{m,n}_{p,q} \left( \begin{matrix} - | (\cdot,\cdot) \end{matrix}\right)$ is the Meijer G-function, $\Phi=(s_{_R}-1)\overline{\gamma}_{_L}, \alpha = \mu_{_{R}}+i,$ and $\eta = i+s_{_R}+\mu_{_{R}}.$\\ Then, by substituting Eq. (17) and Eq. (18) in Eq.
(16) and by referring to [\citenum{Mathematica}, 07.34.21.0011.01], we obtain: \begin{equation} \begin{aligned} \overline{C}_{_R}=& \frac{[(s_{_R}-1)\overline{\gamma}_{_L}]^{s_{_R}}T_{_R}^{\mu_{_{R}}}}{\log(2)B(s_{_R},\mu_{_{R}} )} \left(\frac{c_{_R}}{c_{_R}+\mu_{_{R}}\kappa_{_R}}\right)^{c_{_R}}\\ &\times \sum_{i=0}^{\infty} \frac{({c_{_R}})_{i} (s_{_R} + \mu_{_{R}})_{i} (K_{_R}\mu_{_{R}} \kappa_{_R})^{i} }{i!(\mu_{_{R}})_{i} } \\ &\times \frac{1}{\Phi^{\eta} \Gamma(\eta)}\times G_{3,3}^{3,2} \left( \frac{T_{_R}}{\Phi} \bigg| \begin{matrix} 1-\eta,~ \\ 0, \end{matrix} \begin{matrix} -\alpha,~ \\ -\alpha, \end{matrix}\begin{matrix} 1-\alpha\\ -\alpha \end{matrix} \right). \end{aligned} \end{equation} \subsection{Ergodic Capacity at the Eavesdropper} Using Eq. (13) and Eq. (14), we can write the ergodic capacity as follows \begin{equation} \begin{aligned} \overline{C}_{_E}= \frac{\beta_{_J}^{\nu_{_J}}}{\Gamma(\nu_{_J})} \sum_{n=0}^{\nu_{_I}-1}\sum_{q=0}^{n}{n\choose q}\int\limits_{0}^{\infty}\frac{e^{-\beta_{_I} \gamma}(\beta_{_I} \gamma)^n \Gamma(\Omega)}{(\beta_{_I}\gamma+\beta_{_J})^{\Omega}\log(2)(1+\gamma)n!} d\gamma. \end{aligned} \end{equation} To facilitate the integral calculation, we can perform the following transformations into Fox-H functions \begin{equation} (\beta_{_I}\gamma+\beta_{_J})^{-\Omega}=\frac{1}{\beta_{_J}^{\Omega}\Gamma(\Omega)}H_{1,1}^{1,1} \left( \frac{\beta_{_I}}{\beta_{_J}}\gamma \bigg| \begin{matrix} (1-\Omega,1)\\ (0,1) \end{matrix} \right), \end{equation} \begin{equation} \frac{1}{1+\gamma}=H_{1,1}^{1,1} \left( \gamma \bigg| \begin{matrix} (0,1) \\ (0,1) \end{matrix} \right); \qquad e^{-\beta_{_I}\gamma}=H^{1,0}_{0,1} \left( \beta_{_I}\gamma \bigg| \begin{matrix} -\\ (0,1) \end{matrix} \right). \end{equation} Then, we substitute Eq. (21) and Eq. (22) in Eq. (20) and we compute the integral \cite{TripFoxInt}. Hence, we obtain \begin{equation} \begin{aligned} \overline{C}_{_E}&=\frac{1}{\log(2)\Gamma(\nu_{_J})}\sum_{n=0}^{\nu_{_I}-1}\sum_{q=0}^{n}{n\choose q}\frac{1}{n!\,\beta_{_I}\beta_{_J}^{q}} \\ &\times H_{0,1;1,1;1,1}^{1,0;1,1;1,1} \left( \frac{1}{\beta_{_I}},\frac{1}{\beta_{_J}} \bigg| \begin{matrix} (-n;1,1)\\ (-;-) \end{matrix} \bigg| \begin{matrix} (0,1)\\ (0,1) \end{matrix} \bigg| \begin{matrix} (1-\Omega,1)\\ (0,1) \end{matrix} \right), \end{aligned} \end{equation} where $H_{m_{1},n_{1};m_{2},n_{2};m_{3},n_{3}}^{p_{1},q_{1};p_{2},q_{2};p_{3},q_{3}}(-|(\cdot,\cdot))$ is the bivariate Fox H-function \cite{eBivariate_Relay,eT}. Therefore, by substituting Eq. (19) and Eq. (23) into Eq. (15), we obtain the average secrecy capacity as given at the top of the next page. \begin{figure*} \begin{equation} \begin{aligned} \overline{C}_{s}=\frac{1}{\log(2)}&\left\{ \frac{[(s_{_R}-1)\overline{\gamma}_{_L}]^{s_{_R}}T_{_R}^{\mu_{_{R}}}}{B(s_{_R},\mu_{_{R}} )} \left(\frac{c_{_R}}{c_{_R}+\mu_{_{R}}\kappa_{_R}}\right)^{c_{_R}} \sum_{i=0}^{\infty} \frac{({c_{_R}})_{i} (s_{_R} + \mu_{_{R}})_{i} (K_{_R}\mu_{_{R}} \kappa_{_R})^{i} }{i!(\mu_{_{R}})_{i}\Phi^{\eta} \Gamma(\eta) } G_{3,3}^{3,2} \left( \frac{T_{_R}}{\Phi} \bigg| \begin{matrix} 1-\eta,\\ 0, \end{matrix} \begin{matrix} -\alpha, \\ -\alpha, \end{matrix}\begin{matrix} 1-\alpha\\ -\alpha \end{matrix} \right)\right.\\ &-\left.
\frac{1}{\Gamma(\nu_{_J})}\sum_{n=0}^{\nu_{_I}-1}\sum_{q=0}^{n}{n\choose q}\frac{1}{n!\,\beta_{_I}\beta_{_J}^{q}} H_{0,1;1,1;1,1}^{1,0;1,1;1,1} \left( \frac{1}{\beta_{_I}},\frac{1}{\beta_{_J}} \bigg| \begin{matrix} (-n;1,1)\\ (-;-) \end{matrix} \bigg| \begin{matrix} (0,1)\\ (0,1) \end{matrix} \bigg| \begin{matrix} (1-\Omega,1)\\ (0,1) \end{matrix} \right)\right\} \end{aligned} \end{equation} \sepline \end{figure*} \section{Numerical Results and Discussions} In this section, before investigating the security performance of using the jammer, we first discuss the impact of the shadowing (light shadowing $L_S$ and dense shadowing $D_S$) and the blockage (light blockage $L_B$ and dense blockage $D_B$) on the legitimate receiver link by studying the Rician shadowed distribution as a special case of $\mathcal{D}(\cdot)$. Under the values mentioned in \cite{ShadRician}, we note that for $(L_{B}, L_{S})$, we have an outage probability of 0.02 at SNR = 20 dB. For the same SNR, the outage probability jumps to more than 0.56 in the case of $(L_{B}, D_{S})$. Moreover, when the transmission link is subject to $(D_{B}, L_{S})$, the outage probability is 0.2 at SNR = 14 dB. However, to maintain the same level of outage (0.2), the communication link requires a higher SNR of at least 28 dB when the shadowing becomes dense ($D_{B}, D_{S}$). Therefore, we deduce that the shadowing effect dominates the blockage. In other words, our communication system is much more sensitive to the shadowing effect than to the blockage density. \begin{figure}[H] \includegraphics[height=65mm, width=0.95\linewidth]{BlockageShodowing.eps} \caption{Impact of the shadowing and the blockage density on the outage probability at the legitimate receiver. } \end{figure} \begin{figure}[H] \includegraphics[height=65mm, width=0.95\linewidth]{OutageEve.eps} \caption{Outage probability at the eavesdropper versus distance from the jammer to the eavesdropper $r_{_{J,E}}$. } \end{figure} In Fig. 3, we study the outage probability at $E$ for $r_{_{J,E}}\in$ [0 m, 30 m], with $P_{_J}$ fixed at 5 dB and $P_{_S}$ at 10 dB. Two different threshold scenarios are considered: $\zeta$ = -8 dB and $\zeta$ = -2 dB. Suppose that, to ensure secure communication, the outage probability should exceed $10^{-2}$. For $\zeta$ = -8 dB, $r_{_{J,E}}$ should be less than 1 m when the attacker is very close to the source ($r_{_{S,E}} $= 1 m). If the attacker is 10 m away from $S$, the communication link is protected if $r_{_{J,E}} <$ 15 m. In the second scenario, when the security requirements necessitate $\zeta$ = -2 dB, the jammer should be at most 1.5 m away from the eavesdropper to achieve an outage probability of more than $10^{-2}$ when $r_{_{S,E}}$ = 1 m. Both scenarios emphasize the impact of the distance separating $J$ from $E$ in the critical case where $r_{_{S,E}} \leq $ 10 m. In those scenarios, the use of $J$ is very effective in protecting the communication link within a short distance range. As $r_{_{J,E}}$ increases, the outage probability stops decreasing and exhibits a floor. This is due to the impact of the path loss on the jamming signal, under the aforementioned fading model, power setups, and number of antennas at $J$. Hence, to increase the jamming range of $J$ and mitigate the path loss effect, we have to increase $P_{_J}$ and $K$, which will be discussed in further detail in Fig. 5.
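As a numerical sanity check of Eq. (13), the following minimal Python sketch compares the closed-form CDF at $E$ with a Monte Carlo estimate under the Gamma model of Eqs. (9)--(11); the parameter values are illustrative assumptions and are not those used in the figures.
\begin{verbatim}
from math import comb, factorial, gamma, exp
import numpy as np

rng = np.random.default_rng(0)
nu_I, beta_I, nu_J, beta_J = 3, 0.5, 2, 1.0   # illustrative parameters

def cdf_E(g):
    """Closed-form CDF of gamma_E = gamma_I / (1 + gamma_J), Eq. (13)."""
    total = 0.0
    for n in range(nu_I):
        for q in range(n + 1):
            Om = q + nu_J
            total += (comb(n, q) * gamma(Om) * (beta_I * g)**n
                      / (factorial(n) * (beta_I * g + beta_J)**Om))
    return 1.0 - exp(-beta_I * g) * beta_J**nu_J / gamma(nu_J) * total

# Monte Carlo estimate of the outage probability P(gamma_E < zeta)
g_I = rng.gamma(nu_I, 1.0 / beta_I, size=10**6)
g_J = rng.gamma(nu_J, 1.0 / beta_J, size=10**6)
g_E = g_I / (1.0 + g_J)
for zeta in (0.5, 1.0, 2.0):
    print(zeta, cdf_E(zeta), np.mean(g_E < zeta))
\end{verbatim}
The closed-form and empirical values should agree to within Monte Carlo error.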
\begin{figure} \includegraphics[height=63mm, width=0.90\linewidth]{ErgodicEve.eps} \caption{Effect of the jamming signal power $P_{J}$ on the ergodic capacity at the eavesdropper end.} \end{figure} Fig. 4 demonstrates the impact of the jamming power generated by the jammer $P_{_J}$ on the ergodic capacity at the eavesdropper $E$ with respect to the source power $P_{_{S}}$. In the first scenario, by fixing $r_{_{J,E}}$ = 10 m, we can observe that increasing $P_{_J}$ has no effect on the ergodic capacity for the low transmission power region $P_{_{S}}\in $[-10 dB, 8 dB], since the average capacity is already null because of the low values of $P_{_{S}}$. Hence, using a jammer will not make any difference within this power interval. However, for high $P_{_{S}}$ values such as 25 dB and by using $P_{_J}$ = -5 dB, the ergodic capacity reaches about 1.7 bps/Hz. Therefore, we need to increase the jamming power to at least $P_{_J}$ = 15 dB to decrease the average capacity to 0.95 bps/Hz, and to $P_{_J}$ = 30 dB to achieve a rate of 0.05 bps/Hz. In the second scenario, when $r_{_{J,E}}$ = 70 m $> r_{_{S,E}}$ (the attacker is closer to the BS than to the jammer), we remark that for the low $P_{_{S}}$ range, the influence of the AN signal is similar to the previous scenario. On the other hand, for a high $P_{_{S}}$, the difference between the rates corresponding to $P_{_J}$ = -5 dB and $P_{_J}$ = 15 dB is negligible, and we need at least 30 dB at the jammer to decrease the rate from 3.1 bps/Hz to 2.7 bps/Hz. Therefore, the communication can be secured with low jamming power if $r_{_{J,E}} <$ $ r_{_{S,E}}$; otherwise, we need to increase $P_{_{J}}$ substantially. \begin{figure} \includegraphics[height=63mm, width=0.9\linewidth]{SecrecyCapacity.eps} \caption{Average secrecy capacity for different numbers of antennas at the jammer.} \end{figure} We now focus on the relation between the average secrecy capacity $\overline{C_{_s}}$ and the number of antennas $K$ at the jammer. As shown in Fig. 5, fixing the number of antennas at the BS to $N$ = 4, we observe that the average secrecy capacities for different values of $K$ are very close to each other for BS power in [-10 dB, 18 dB]. We note that as $P_{_S}$ increases, we start to notice the impact of $K$ on the average secrecy capacity. If we require $\overline{C_{_s}}$ of at least 5.5 bps/Hz at $P_{_S}$ = 35 dB, only 4 bps/Hz can be achieved without a jammer; to satisfy such a requirement, at least 4 antennas are needed at the jammer. Therefore, the number of antennas $K$ has a major impact on the performance of the security scheme. \section{Conclusion} In this paper, we examined the PLS of a wireless vehicular network whose communication is subject to an eavesdropping attack. We proposed the use of a friendly jammer that transmits AN to perturb the attacker's channel and decrease its SINR. As a channel model, we adopted the recently proposed Double $\kappa$-$\mu$ Shadowed Fading model, which provides a variety of fading distribution models. To evaluate the security performance, we studied the outage probability and the secrecy capacity with respect to special fading models such as Rician shadowed and Nakagami-m. The results showed that the jammer has a significant impact on the security performance: a notable outage probability can be obtained at the eavesdropping end, especially for short distances separating the jammer from the attacker.
Moreover, the plots demonstrated the improvement of the average secrecy capacity obtained by equipping the jammer with a massive number of antennas. \section*{Appendix} To prove that the Nakagami-m distribution is a special case of the envelope of $\mathcal{D}(\cdot)$, we can refer to the envelope equation [\citenum{channel}, Eq. (5)], take the limits $c_{_l}, s_{_l}\rightarrow\infty$, and substitute $\kappa_{_l}$ = 0 and $\mu_{_l}$ = m. Then we perform the following computation: \begin{equation} \begin{aligned} &\lim \limits_{\substack{c_{_l}\to\infty\\s_{_l}\to\infty}} { f}_{X_{_l}}(X) =\lim \limits_{\substack{c_{_l}\to\infty\\s_{_l}\to\infty}}\frac{2(s_{_l}-1)^{s_{_l}}c_{_l}^{c_{_l}}T_{_l}^{\mu_{_l}}X^{2\mu_{_l}-1}{\hat{X}}^{2s_{_l}}}{\left(T_{_l}X^{2}+(s_{_l}-1) \hat{X}^{2}\right)^{s_{_l}+\mu_{_l}}}\\ &\frac{(c_{_l}+\mu_{_l}\kappa_{_l})^{-c_{_l}}}{B(s_{_l},\mu_{_l})}\times{}_{2}F_{1}\left( c_{_l}, s_{_l}+\mu_{_l};\mu_{_l}; \frac{K_{_l}\mu_{_l}\kappa_{_l}X^{2}}{T_{_l}X^{2}+(s_{_l}-1)\hat{X}^{2}}\right)\\ &~~~~=\lim \limits_{s_{_l}\to\infty} 2m^{m}\left( \frac{s_{_l}-1}{\frac{mX^{2}}{\hat{X}^2}+s_{_l}-1}\right)^{s_{_l}+m}\frac{X^{2m-1}\,\Gamma(s_{_l}+m)}{\hat{X}^{2m}(s_{_l}-1)^{m}\,\Gamma(s_{_l})\Gamma(m)}\\ &~~~~= \frac{2}{\Gamma(m)}\left(\frac{m}{\hat{X}^{2}}\right)^{m}\exp{\left(-\frac{mX^{2}}{\hat{X}^2}\right)}X^{2m-1}, \end{aligned} \end{equation} where $X$ is the random variable, $m$ is the shape parameter, $\hat{X}$ is the root mean square (rms) of the signal envelope (so $\hat{X}^{2}$ is the spread-controlling parameter), $T_{_l}$ = m, $K_{_l}$ = 0, and $_{2}F_{1}(\cdot,\cdot;\cdot;0)$ = 1 according to the identity [\citenum{Mathematica}, 07.23.03.0001.01]. \bibliographystyle{IEEEtran}
\section{Introduction} The fifth generation (5G) new radio (NR) access technology, introduced in Release 15 of the 3rd generation partnership project (3GPP), enables offering unique services for mobile broadband and ultra-reliable low-latency communications (URLLC) \cite{NR3GPP, NRthenew,5Gwireless}. With its deployment flexibility, wide range of spectrum availability, and ultra-lean design, 5G NR is able to effectively serve a variety of use cases with stringent requirements on data rate, latency and energy efficiency. NR has been designed to operate at frequency range 1 (FR1) from 410 MHz to 7.125 GHz and frequency range 2 (FR2) from 24.25 GHz to 52.6 GHz. In addition, NR introduces unique features such as flexible numerology (e.g., subcarrier spacing and slot duration) and dynamic time division duplex (TDD), thus making it suitable for various deployment scenarios. Meanwhile, NR physical channels and signals are designed in a way to meet the 5G performance requirements. For instance, compared to long-term evolution (LTE), several enhancements have been made in designing synchronization signals and physical downlink control channel (PDCCH). PDCCH carries downlink control information (DCI) which plays a key role in downlink (DL) and uplink (UL) scheduling, as well as other aspects such as power control, slot format indication, and preemption indication. Ensuring a robust performance for PDCCH requires careful considerations. One key system performance evaluation metric is the PDCCH blocking probability which indicates the percentage of user equipments (UEs) that cannot be scheduled by the network for receiving the DCI. Furthermore, the blocking probability impacts the latency which is a critical metric in many 5G use cases. Achieving a desired system performance requires minimizing the blocking probability. Note that blocking probability is a function of various network parameters such as number of UEs, size of the Control Resource Set (CORESET), PDCCH aggregation levels (ALs), and scheduling strategy. Therefore, in order to guarantee a minimum blocking probability, there is a need for in-depth evaluations of the impact of network parameters on the blocking probability. \subsection{Related work on NR PDCCH} In \cite{takeda2019}, the authors provide an overview of the 5G NR PDCCH by discussing physical layer structure of PDCCH, monitoring schemes, and DCI aspects. In \cite{Chen2}, the link-level performance of NR PDCCH is evaluated in terms of the block error rate (BLER). The work in \cite{Hamidi} studies the search space design for NR PDCCH while considering UE's PDCCH blind decoding (BD) and channel estimation capabilities. In \cite{Braun}, an overview of NR PDCCH as well as enhancement techniques for search space design (in particular PDCCH hash function) are presented. Moreover, the performance of the proposed techniques in \cite{Braun} are evaluated in terms of PDCCH blocking probability. While previous studies provide some specific results for PDCCH blocking probability, the literature lacks a comprehensive analysis on this metric considering a wide range of relevant network parameters. \subsection{Contributions} In this paper, we provide an in-depth analysis on the NR PDCCH blocking probability in a network with multiple UEs that need to be scheduled for receiving the PDCCH. 
In particular, we evaluate the impact of various parameters including number of UEs, CORESET size, PDCCH ALs and their distribution, number of PDCCH candidates, UE's capability, and scheduling strategy on the blocking probability. Our analysis demonstrates inherent tradeoffs and design insights for efficient network design in terms of PDCCH blocking probability. Specifically, one can minimize the blocking probability by properly adjusting the network parameters based on the scenario. The rest of this paper is organized as follows. In Section II, we provide an overview of NR PDCCH. In Section III, we present the system model. Results and discussions are presented in Section IV and conclusions are drawn in Section V. \section{Overview of NR PDCCH} PDCCH carries downlink control information for one or a group of UEs for several purposes such as DL scheduling assignment, UL scheduling grant, power control, and preemption indication. In NR, different DCI formats for different purposes are supported. Different DCI formats may or may not have different sizes. The size of a DCI format depends on the DCI fields that support specific features. DCI is transmitted through PDCCH candidates which are located within CORESETs. Each CORESET can span over one, two, or three contiguous orthogonal frequency-division multiplexing (OFDM) symbols over multiple resource blocks (RBs), where each RB consists of 12 subcarriers. In the frequency domain, a CORESET spans over one or multiple chunks of 6 RBs \cite{ahmadi}. A PDCCH candidate is carried by 1, 2, 4, 8 or 16 control channel elements (CCEs). Each CCE is composed of 6 resource element groups (REGs), and each REG is 12 resource elements (REs) in one OFDM symbol. Note that an RE is the basic resource unit in NR which consists of one subcarrier in one OFDM symbol. In Figure \ref{CORESET}, we provide an illustrative example for a CORESET with 36 RBs and one OFDM symbol consisting of 6 CCEs. Also, a REG bundle consists of multiple REGs, where the bundle size can be 2, 3, or 6, depending on the CORESET duration. Each CORESET is associated with a CCE-to-REG mapping which can be interleaved or non-interleaved. In the non-interleaved case, all CCEs of a PDCCH candidate are mapped to consecutive REG bundles of the associated CORESET. In the interleaved case, REG bundles of CCEs are distributed in the frequency domain over the entire CORESET bandwidth. In order to receive DCI, the UE needs to perform blind decoding as it is not aware of the exact position of the PDCCH candidate used by the network. PDCCH candidates which need to be monitored by UEs are configured using so-called search space (SS) sets with each SS being associated with one CORESET. In NR, there are two types of SS: 1) common SS (CSS) set, commonly monitored by a group of UEs, and 2) UE-specific SS (USS), monitored by a specific UE. Within a search space configuration, various PDCCH monitoring parameters such as the number of candidates and the possible number of CCEs in each candidate can be set \cite{TS_38.331}. The number of CCEs used for a PDCCH candidate is referred to as an aggregation level (AL). In NR, different aggregation levels can be used for PDCCH transmissions. Currently, the possible NR PDCCH ALs are $\{1, 2, 4, 8, 16\}$. A higher AL provides better coverage and is more suitable for larger cells and extreme coverage scenarios, at the cost of more CCEs and consequently more time-frequency resources. For each AL, the UE may need to monitor multiple candidates.
In Figure \ref{Candidates}, we show an example of PDCCH candidates with ALs 4, 8, and 16 in a CORESET consisting of 16 CCEs. To decode DCI, a UE performs blind decoding as it does not have explicit information about the DCI size, the AL, and the PDCCH candidate used. In general, the number of blind decodes (BDs) depends on various factors such as the number of different DCI sizes, the number of ALs and the number of PDCCH candidates that need to be monitored for each AL. In order to limit the UE complexity and power consumption, there are limits on the maximum number of blind decodes and the number of non-overlapping CCEs for channel estimation per slot. The BD and CCE limits (for non-carrier aggregation) for 15/30/60/120 kHz subcarrier spacings (SCSs) are, respectively, 44/36/22/20 and 56/56/48/32 \cite{TS_38.213}. Next, we describe our system model used for the blocking probability evaluations. \begin{figure}[!t] \begin{center} \includegraphics[width=9cm]{CORESET2.pdf} \vspace{-0.2cm} \caption{ An illustration of a CORESET with 36 RBs, one symbol (6 CCEs).}\vspace{-0.02cm} \label{CORESET} \end{center} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[width=9cm]{Candidates3.jpg} \vspace{-0.3cm} \caption{ An illustration of PDCCH candidates of ALs 4, 8, and 16 in a CORESET with 16 CCEs.}\vspace{-0.02cm} \label{Candidates} \end{center} \end{figure} \section {System Model} Let $U$ be the number of UEs which need to be simultaneously scheduled by the network for receiving DCI. A gNB (i.e., 5G base station) uses a CORESET with $q$ RBs and $d$ symbol duration to schedule the UEs. In this case, the CORESET size in terms of number of CCEs is given by $C=\frac{q \times d}{6}$. The CCEs within the CORESET are indexed from 0 to $C-1$. The gNB can use different sets of ALs for scheduling the UEs. For each UE a suitable AL can be adopted based on several factors including the performance requirements and link quality. We use $p_L$ to denote the probability of using AL $L$ for the UEs in a cell. Specifically, $\mathbf{P}=[p_1, p_2, p_4, p_8, p_{16}]$ indicates the distribution of ALs 1, 2, 4, 8, and 16. The positions of different PDCCH candidates for each AL are determined using a hash function \cite{TS_38.213}. Let $l_{k,i}$ be the index of the $(i+1)^{\text{th}}$ CCE of candidate $k$, where $ i \in \{0,..., L-1\}$. Therefore, CCE indices for candidate $k$ with AL $L$ (i.e., $L$ CCEs) are: $l_{k,0},..., l_{k,L-1}$. In a search space set associated with a CORESET (with index $p$) in slot $t$, the CCE indices for PDCCH candidate $k$ are determined based on the following hash function (without carrier aggregation) \cite{TS_38.213}: \begin{equation}\label{hash} {l_{k,i}} = L\left[ {\left( {{Y_{p,t}} + \left\lfloor {\frac{{kC}}{{LM}}} \right\rfloor } \right)\it{mod} \left\lfloor {\frac{C}{L}} \right\rfloor } \right] + i, \end{equation} where $\left\lfloor . \right\rfloor$ is the floor function and $\it{mod}$ represents the modulo operation. $M$ is the number of PDCCH candidates for AL $L$, and $ k \in \{0,..., M-1\}$ is the index of a PDCCH candidate with AL $L$. Moreover, $Y_{p,t}$ is a constant which equals 0 for a CSS, and for a USS is given by \cite{TS_38.213}: \begin{equation}\label{Yp} {Y_{p,t}} = \left( {{A_p}{Y_{p,t-1}}} \right)\it{mod}\, (\text{65537}), \end{equation} where for the first slot (i.e., $t=0$), we have $Y_{p,-1}=n_{RNTI}=C_{RNTI} \ne 0$, with $C_{RNTI}$ being a unique identification number for each UE.
$A_p=39827, 39829$, or $39839$, respectively, for $p \,\it{mod}\, 3= 0, 1, \text{ or } 2$, where $p$ is the CORESET index. From (\ref{hash}), we can see that the index of the first CCE of candidates with AL $L$ can be 0, $L$, $2L$, etc., as also illustrated in Figure \ref{Candidates} for $L=4$. The gNB can use different PDCCH candidates within the CORESET for scheduling different UEs. In this case, blocking occurs for a UE when there are no fully free (i.e., non-overlapping) PDCCH candidates available for scheduling that UE. PDCCH blocking probability is defined as the probability that all PDCCH candidates configured for a UE to monitor are blocked by candidates used by other UEs. That is, the blocking probability is the ratio of the number of blocked UEs to the total number of UEs that need to be scheduled, as written below: \begin{equation} B=\frac{\text{Number of blocked UEs}}{U}, \end{equation} with $U$ being the total number of UEs to be scheduled. Note that the blocked UEs need to be scheduled at another PDCCH opportunity. In the example provided in Figure \ref{Block}, UE 2 (AL 4) is successfully scheduled, while there are no non-overlapping candidates available for both UE 1 (AL 4) and UE 3 (AL 2); thus one of them will be blocked. In this case, the blocking probability is $B=1/3$. In general, the PDCCH blocking probability is a complicated function of various parameters including number of UEs, CORESET size, ALs and their distribution, the number of candidates for each AL, and UE capability in terms of supported BD and CCE limits. Moreover, in the general case, there is no closed-form expression for the PDCCH blocking probability. Next, we investigate the impact of various parameters on the PDCCH blocking probability. \begin{figure}[!t] \begin{center} \includegraphics[width=7cm]{Blocking2.pdf} \vspace{-0.03cm} \caption{ Example of PDCCH blocking in a CORESET.} \label{Block} \end{center} \end{figure} \section{Simulation Results and Analysis} In this section, we provide simulation results for blocking probability evaluations while analyzing the effect of different parameters. Specifically, we investigate the impact of number of UEs, CORESET size, number of candidates, ALs and their distribution, UE capability, and scheduling strategy on the blocking probability. We focus on a USS, and Monte Carlo simulations are performed over 10000 iterations. \subsection{Impact of Number of UEs} In order to evaluate the effect of the number of UEs to be scheduled ($U$) on the blocking probability, we consider a CORESET of size 54 CCEs (e.g., a CORESET with 108 RBs and 3 symbols). Also, we consider ALs [1, 2, 4, 8, 16], with distribution [0.4, 0.3, 0.2, 0.05, 0.05]. For each UE, the number of PDCCH candidates for ALs [1, 2, 4, 8, 16] are, respectively, [6, 6, 4, 2, 1]. In Figure \ref{UE_number}, we show how the blocking probability varies by changing the number of UEs. As expected, the blocking probability increases when the number of UEs increases. When more UEs are scheduled within a given CORESET, there is a higher probability that the gNB does not find an available PDCCH candidate for a UE, thus resulting in a higher blocking probability. For example, Figure \ref{UE_number} shows that by doubling the number of UEs from 15 to 30, the blocking probability increases from 0.06 to 0.27, corresponding to an increase by a factor of 4.5.
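To illustrate the simulation procedure, the following minimal Python sketch estimates the blocking probability using the hash function of Eqs. (1)--(2) and a greedy low-to-high-AL allocation (Strategy 1 in Section IV-G below). The uniform random RNTI draw and the greedy allocation are simplifying assumptions, so the resulting numbers are indicative rather than a reproduction of the figures.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
ALS = [1, 2, 4, 8, 16]                   # aggregation levels
P_AL = [0.4, 0.3, 0.2, 0.05, 0.05]       # AL distribution
M_AL = {1: 6, 2: 6, 4: 4, 8: 2, 16: 1}   # monitored candidates per AL

def candidates(L, M, Y, C):
    """CCE index sets of the M candidates of AL L, per the hash in Eq. (1)."""
    out = []
    for k in range(M):
        first = L * ((Y + (k * C) // (L * M)) % (C // L))
        out.append(set(range(first, first + L)))
    return out

def blocking_probability(U, C, n_trials=2000):
    blocked = 0
    for _ in range(n_trials):
        als = rng.choice(ALS, size=U, p=P_AL).tolist()
        Ys = ((39827 * rng.integers(1, 65536, size=U)) % 65537).tolist()
        used = set()
        for L, Y in sorted(zip(als, Ys)):         # Strategy 1: low-to-high ALs
            for cand in candidates(L, M_AL[L], Y, C):
                if not cand & used:               # fully free candidate found
                    used |= cand
                    break
            else:                                 # all candidates overlap
                blocked += 1
    return blocked / (U * n_trials)

print(blocking_probability(U=20, C=54))
\end{verbatim}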
\begin{figure}[!t] \begin{center} \includegraphics[width=9cm]{./Figures/UE_number.eps} \vspace{-0.3cm} \caption{ Blocking probability versus number of UEs to be scheduled.}\vspace{-0.02cm} \label{UE_number} \end{center} \end{figure} \subsection{Impact of CORESET Size} The CORESET size can significantly affect the blocking probability. Figure \ref{CORESET_num} shows the blocking probability as a function of the CORESET size for $U=20$ UEs. As we can see, the blocking probability decreases as the CORESET size increases. With a larger CORESET, more CCEs and PDCCH candidates are available for scheduling the UEs. In addition, the scheduler has more flexibility for allocating PDCCH candidates to the UEs. From Figure \ref{CORESET_num} we can see that the blocking probability can be reduced from 0.36 to 0.1 by increasing the number of CCEs in the CORESET from 30 to 60. Note that the impact of further increasing the CORESET size is minimal, as almost all UEs can already be successfully scheduled. \begin{figure}[!t] \begin{center} \includegraphics[width=9cm]{./Figures/CORESET.eps} \vspace{-0.3cm} \caption{ Blocking probability versus CORESET size (number of CCEs).}\vspace{-0.02cm} \label{CORESET_num} \end{center} \end{figure} \subsection{Impact of Number of PDCCH Candidates} The number of PDCCH candidates for different ALs is another important factor. In NR, the number of PDCCH candidates can be configured for each aggregation level among $\{0, 1, 2, 3, 4, 5, 6, 8\}$ in the USS \cite{ahmadi, TS_38.213}. Note that for each UE, the locations of candidates are determined based on (\ref{hash}) and (\ref{Yp}); thus, different UEs generally have different CCEs mapped to a given candidate. Here, we separately evaluate the impact of the number of candidates for AL 1, AL 2, and AL 4. To this end, we only change the number of candidates for one of the ALs, while setting the number of candidates for the other ALs to 1. The AL distribution is [0.4, 0.3, 0.2, 0.05, 0.05] for ALs [1, 2, 4, 8, 16]. Figure \ref{Candidate_num} shows that increasing the number of PDCCH candidates for each AL results in a lower blocking probability. With more PDCCH candidates, the gNB has more flexibility to avoid overlapping between candidates of different UEs, thus reducing the blocking probability. For instance, by increasing the number of candidates from 2 to 6 in this figure, we can observe blocking probability reductions of 20\%, 30\%, and 17\%, respectively, for ALs 1, 2, and 4. Also, by increasing the number of candidates in Figure \ref{Candidate_num}, we see a higher blocking probability reduction for AL 2, compared to ALs 1 and 4. This is because, considering the AL distribution, the overall impact of AL 2 on the blocking probability is larger than that of ALs 1 and 4. We note that, while having more PDCCH candidates is beneficial for blocking probability reduction, it increases the number of BDs and monitored CCEs, which can increase the UE complexity and power consumption. This shows a tradeoff between blocking probability and UE complexity/power consumption when increasing the number of PDCCH candidates. \begin{figure}[!t] \begin{center} \includegraphics[width=9cm]{./Figures/CandidateAL1,2,4.eps} \vspace{-0.3cm} \caption{ Blocking probability versus the number of PDCCH candidates for an AL (20 UEs and CORESET size 54 CCEs).}\vspace{-0.02cm} \label{Candidate_num} \end{center} \end{figure} \subsection{Impact of ALs} As discussed earlier, a higher AL provides better coverage at the cost of using more CCEs.
We now evaluate the effect of each AL on the blocking probability separately. For the sake of evaluation, we consider using only one of the ALs among $\{1, 2, 4, 8, 16\}$ in each scenario; that is, in each scenario only one AL is used, with probability 1. The number of PDCCH candidates for ALs 1, 2, 4, 8, and 16 are, respectively, 6, 6, 4, 2, and 1. For example, in the case of AL 1, the network only configures 6 candidates for each UE to monitor (and the other ALs are not monitored). As Figure \ref{ALs_impact} shows, using a higher AL leads to a higher blocking probability. Consequently, in order to guarantee a specific blocking probability, a smaller number of UEs can be simultaneously scheduled with a higher AL. The results in Figure \ref{ALs_impact} show that to maintain the blocking probability below 0.2, the maximum possible number of UEs to be scheduled with ALs 2, 4, 8, and 16 is 33, 16, 6, and 2, respectively. \begin{figure}[!t] \begin{center} \includegraphics[width=8.2cm]{./Figures/ALs.eps} \vspace{-0.3cm} \caption{ Blocking probability for different ALs (CORESET size = 54 CCEs).}\vspace{-0.02cm} \label{ALs_impact} \end{center} \end{figure} \subsection{Impact of AL Distribution} \label{CoverageCond} Note that the distribution of ALs can be determined based on the signal-to-interference-plus-noise ratio (SINR) distribution of UEs (obtained, e.g., from system-level simulations) and the PDCCH link-level performance with different ALs. In fact, suitable ALs are assigned to UEs to meet the PDCCH performance requirements, from which one can determine how ALs are distributed in a CORESET. For our evaluation in this section, we consider three scenarios corresponding to good, medium, and extreme coverage. Specifically: \begin{itemize} \item Good coverage: most of the UEs are in good coverage and require low ALs (i.e., ALs 1 and 2), with AL distribution [0.5, 0.4, 0.07, 0.02, 0.01]. \item Medium coverage: most of the UEs are in medium coverage and require medium ALs (i.e., AL 4), with AL distribution [0.05, 0.2, 0.5, 0.2, 0.05]. \item Extreme coverage: most of the UEs are in poor coverage and require high ALs (i.e., ALs 8 and 16), with AL distribution [0.01, 0.02, 0.07, 0.4, 0.5]. \end{itemize} The CORESET size is 54 CCEs and the number of PDCCH candidates for ALs [1, 2, 4, 8, 16] are [6, 6, 4, 2, 1]. Figure \ref{AL_dist} shows that the blocking probability is lower for better coverage conditions. The AL distribution depends on the coverage condition. As the coverage condition gets worse, it is more likely that higher ALs are used to meet the coverage requirements. This, in turn, increases the blocking probability. For example, for 20 UEs, the blocking probabilities for the good, medium, and extreme coverage scenarios are 0.02, 0.38, and 0.72, respectively. \begin{figure}[!t] \begin{center} \includegraphics[width=8.5cm]{./Figures/Al_distribution.eps} \vspace{-0.3cm} \caption{ Blocking probability for different AL distributions.}\vspace{-0.02cm} \label{AL_dist} \end{center} \end{figure} \subsection{Impact of UE's Capability} In this section, we analyze the impact of the UE's capability in terms of BD/CCE limits on the blocking probability. In general, when the BD/CCE limits are reduced, the UE can monitor fewer PDCCH candidates per slot. This limits the scheduling flexibility and increases the blocking probability.
For the evaluation of reduced BD limits, we consider the following cases, assuming that the UE is configured with the maximum number of PDCCH candidates: \begin{itemize} \item Reference case: we assume that the UE is configured to monitor [6, 6, 4, 2, 1] PDCCH candidates for ALs [1, 2, 4, 8, 16]. \item Reduced BD case $A$: the UE is configured to monitor [3, 3, 2, 1, 1] PDCCH candidates for ALs [1, 2, 4, 8, 16]. In this case, the BD limit is reduced by around 50\% compared to the reference case. \item Reduced BD case $B$: the UE is configured to monitor [1, 1, 1, 1, 1] PDCCH candidates for ALs [1, 2, 4, 8, 16]. In this case, the BD limit is reduced by around 75\% compared to the reference case. \end{itemize} We consider the AL distribution [0.4, 0.3, 0.2, 0.05, 0.05]. Figure \ref{BD_reduction} shows that the blocking probability increases by reducing the BD limit. For instance, for a CORESET size of 54 CCEs, the blocking probability increases by factors of 1.9 and 3 when reducing the BD limit by 50\% and 75\%, respectively, compared to the reference case. \begin{figure}[!t] \begin{center} \includegraphics[width=8.5cm]{./Figures/BD_reduction.eps} \vspace{-0.3cm} \caption{ Blocking probability for different blind decoding (BD) capabilities.}\vspace{-0.02cm} \label{BD_reduction} \end{center} \end{figure} \subsection{Impact of Scheduling Strategy} Scheduling strategy is another impacting factor. In particular, it matters how the gNB allocates PDCCH candidates to different UEs. For instance, let us consider two scheduling strategies: \begin{itemize} \item Strategy 1: the scheduler allocates UEs from low to high ALs. That is, UEs with low ALs are scheduled first (this strategy is adopted in our evaluations). \item Strategy 2: the scheduler allocates UEs from high to low ALs. That is, UEs with high ALs are scheduled first. \end{itemize} Figure \ref{Strategy} shows that Strategy 1 outperforms Strategy 2 in terms of blocking probability. The reason is that Strategy 2 prioritizes UEs with high ALs, which use more CCEs, thus resulting in a higher blocking probability compared to Strategy 1. As an example, in Strategy 2, a UE using AL 16 may block 16 UEs using AL 1. Note that the impact of the scheduling strategy becomes more crucial as the number of UEs increases. According to Figure \ref{Strategy}, for a small number of UEs (e.g., 10) the two scheduling strategies have the same performance. However, when the number of UEs increases to 40, the blocking probability using Strategy 2 is 1.9 times larger than that with Strategy 1 in a CORESET with 54 CCEs. It should be noted that the performance of different scheduling strategies is also dependent on the CORESET size. \begin{figure}[!t] \begin{center} \includegraphics[width=8.5cm]{./Figures/SchedulingStrategies.eps} \vspace{-0.3cm} \caption{ Blocking probability for different scheduling strategies.}\vspace{-0.02cm} \label{Strategy} \end{center} \end{figure} \subsection{Design Problem: Minimum CORESET Size for a Blocking Probability Target} One key design problem is to determine the minimum CORESET size needed for meeting a blocking probability target. More specifically, given the number of UEs and the coverage condition, the network can properly determine the CORESET size to ensure the blocking probability does not exceed a specified threshold. We consider the medium coverage condition presented in Section \ref{CoverageCond} and find the minimum CORESET size that keeps the blocking probability below certain thresholds.
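Reusing the \texttt{blocking\_probability} estimator from the sketch in Section IV-A, a minimal search for this design problem could look as follows; the size grid and the 5\% target are illustrative assumptions.
\begin{verbatim}
# assumes blocking_probability(U, C) from the earlier sketch
def min_coreset_size(U, target, sizes=range(18, 121, 6)):
    """Smallest CORESET size (in CCEs) whose estimated blocking
    probability is below the target."""
    for C in sizes:
        if blocking_probability(U, C) < target:
            return C
    return None   # target unreachable within the grid

for U in (5, 10, 15):
    print(U, min_coreset_size(U, target=0.05))
\end{verbatim}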
Figure \ref{min_CORESET} shows the minimum required CORESET size for $\{5, 10, 15\}$ UEs and different blocking probability targets $\{5\%, 10\%, 15\%, 20\%\}$. Clearly, the CORESET size must increase when more UEs are scheduled and a smaller blocking probability target needs to be met. For example, comparing two cases: i) 5 UEs and a 20\% blocking probability requirement, and ii) 15 UEs and a 5\% blocking probability requirement, shows that the CORESET size for the latter case needs to be 5 times larger than that of the former (i.e., 100 CCEs versus 20 CCEs). While a larger CORESET is beneficial for UE scheduling, it may not be desired from a spectral and energy efficiency perspective. Therefore, the network should properly select the CORESET size based on the requirements and deployment scenarios. \begin{figure}[!t] \begin{center} \includegraphics[width=8.5cm]{./Figures/min_CORESET.eps} \vspace{-0.3cm} \caption{ Minimum required CORESET size for different number of UEs and blocking probability requirements.}\vspace{-0.02cm} \label{min_CORESET} \end{center} \end{figure} \section{Conclusions} In this paper, we have conducted a comprehensive analysis of the NR PDCCH blocking probability in a network with multiple UEs that need to be scheduled for receiving the PDCCH. We have evaluated the impact of a wide range of parameters and design factors on the blocking probability. In particular, we have analyzed the effect of number of UEs, CORESET size, PDCCH ALs and their distribution, PDCCH candidates, UE's capability, and scheduling strategy on the blocking probability. Our analysis, along with simulation results, has shown fundamental tradeoffs and design insights for efficient network design in terms of PDCCH blocking probability. In particular, based on the scenario, constraints, and system parameters (e.g., number of UEs and CORESET size), one can adopt effective techniques to reduce the blocking probability. For instance, in a scenario with limited CORESET size and good coverage condition, efficient scheduling strategies and increasing the number of PDCCH candidates for small ALs can be effective for blocking probability reduction. \vspace{0.1cm} \def1.04{1.04} \bibliographystyle{IEEEtran}
\section{Introduction} \subsection{Objective and main result} This article belongs to a research project in which we attempt to understand the effects of different quadratic terms coupled in diagonalized wave-Klein-Gordon systems in $2+1$ dimensional space-time. In this article, we are interested in a special type of wave-Klein-Gordon system represented by the following two systems: \begin{subequations}\label{eq1-main} \begin{equation}\label{eq1a-main} \aligned &\Box u = A^{\alpha}\del_{\alpha}(v^2), \\ &\Box v + c^2v = B^{\alpha}v\del_{\alpha}u; \endaligned \end{equation} \begin{equation}\label{eq1b-main} \aligned &\Box u = A^{\alpha\beta}\del_{\alpha}\del_{\beta}(v^2), \\ &\Box v + c^2v = Buv. \endaligned \end{equation} \end{subequations} It can be noticed that the right-hand sides of the wave equations contain the strong coupling terms introduced in \cite{M-2020-strong}. We will establish global existence results for these systems with small localized regular initial data. More precisely, \begin{theorem}\label{thm-main} Consider the Cauchy problems associated to \eqref{eq1-main} with initial data posed on $\{t=2\}$ and compactly supported in $\{|x|<1\}$: \begin{align*} &v(2,x) = v_0(x),\quad \del_tv(2,x) = v_1(x) \\ &u(2,x) = u_0(x),\quad \del_tu(2,x) = u_1(x). \end{align*} There exists an integer $N\geq 9$ and a positive constant $\vep_0>0$ determined by the system and $N$, such that for all $0\leq \vep\leq \vep_0$, if \begin{equation} \|u_0\|_{H^{N+1}} + \|v_0\|_{H^{N+1}} + \|u_1\|_{H^N} + \|v_1\|_{H^N}\leq \vep, \end{equation} then the local-in-time solution of \eqref{eq1-main} associated with such initial data extends to time infinity. \end{theorem} The research on \eqref{eq1a-main} is motivated by a stability problem of a type of totally geodesic wave map. In \cite{Ab-2019} the following system was formulated: \begin{equation}\label{eq1-wave-map} \begin{aligned} &-\Box \phi^1 = -2\sum_{k=2}^n\phi^k \del_1\phi^k + \text{h.o.t.} \\ &-\Box \phi^k - \phi^k = 2\phi^k \del_1u + \text{h.o.t.}, \quad k=2,\cdots, n \end{aligned} \end{equation} where $u$ and $\phi^k$ are scalar functions defined in $\RR^{2+1}$. Relying on this formulation, a global stability result for wave maps in $3+1$ and higher dimensions was established in \cite{Ab-2019}. The lower-dimensional cases were suggested as open problems therein. In this article we will give a preliminary answer to the $2+1$ dimensional case (Theorem \ref{thm-wave-map}). In Section \ref{sec-conclusion-wave-map} we sketch the geometric background of \eqref{eq1-wave-map}. Detailed discussions on the formulation of \eqref{eq1-wave-map} can be found in \cite{Ab-2019}, and for a general review on wave maps, one may consult \cite{SS98} and \cite{Kri07}. The research on \eqref{eq1b-main} is motivated by the global stability problem of the Klein-Gordon-Zakharov system: \begin{equation}\label{eq1-Zakharov} \aligned & \Box E^a + E^a = -nE^a,\quad a = 1,2, \\ & \Box n = \Delta \big(|E^1|^2 + |E^2|^2\big), \endaligned \end{equation} where $n, E^a$ are scalar functions. The Zakharov equation was introduced in \cite{Zakharov-1972}. It describes a type of oscillation of a plasma. The Klein-Gordon-Zakharov system is a typical wave-Klein-Gordon system. The global stability result in $3+1$ space-time was established in \cite{Ozawa-1995} with a Fourier-analytic method and later in \cite{Tsutaya-1996} via the vector field method. This result has since been revisited and improved in many contexts.
The main challenge regarding wave-Klein-Gordon systems comes from the lack of scaling invariance of the Klein-Gordon equation. See \cite{LM1} for a detailed explanation. Recently, S. Dong \cite{Dong-2020-2} established the global stability result in $2+1$ space-time with a special type of initial data. More precisely, Dong's method shows that, if there exists a compactly supported function $n^{\Delta}$ such that $n = \Delta n^{\Delta}$ on the initial slice, then with suitable assumptions on the regularity and smallness of the initial data, the associated local-in-time solution extends to time infinity. His method is based on hyperboloidal foliation combined with a weighted energy estimate (called ``ghost weight''). In this article, as we have shown in the statement of Theorem \ref{thm-main}, we managed to establish a global stability result for general initial data in the small-localized-regular regime. \subsection{Main difficulties and strategy of proof} As explained in \cite{M-2020-strong}, in $\RR^{2+1}$, the main difficulty concerning the strong coupling terms, i.e., pure Klein-Gordon quadratic terms in the wave equation, is that they completely destroy the conformal invariance of the wave equation (which supplies better decay and energy bounds). It seems to be impossible to establish a uniform or slowly increasing conformal energy bound on the wave component. Then one will face the insufficiency of the so-called principal decay. See \cite{M-2020-strong} for a detailed explanation. Roughly speaking, in the case of strong coupling, one can only expect a uniform standard energy bound. This bound leads to (via the Klainerman-Sobolev inequality) the following decay \begin{equation}\label{eq3-04-10-2020} |\del u|\simeq s^{-1}\simeq (|t-r|+1)^{-1/2}t^{-1/2} \end{equation} which will not be sufficient to close the bootstrap argument. Fortunately, in the present case the strong couplings are in divergence form. This motivates us to ``integrate'' the wave equation, i.e., to regard the ``primitives'' of the wave component instead of the component itself. The advantage of this strategy is that the primitives also satisfy a wave equation (again with strong couplings), and the wave component is recovered as a derivative of these primitives. Then the gradient of the wave component coupled in the Klein-Gordon equation becomes a combination of components of the Hessians of the primitives. As explained in \cite{M-2020-strong} (see Proposition \ref{prop1-14-08-2020} for details), the Hessian of a solution to a wave equation enjoys better decay and energy bounds in the sense of principal decay, and this brings considerable convenience. Here we only show an example. Compared with \eqref{eq3-04-10-2020}, when the standard energy on hyperboloids is uniformly bounded, \begin{equation}\label{eq4-04-10-2020} |\del\del u|\simeq (s/t)^{-1}s^{-2} + (s/t)^{-1}|\Box u| \simeq (|t-r|+1)^{-3/2}t^{-1/2} + t^{1/2}(1+|t-r|)^{-1/2}|\Box u| \end{equation} where $|\Box u|$ is quadratic (by applying the wave equation) and can be expected to enjoy sufficient decay. Comparing \eqref{eq4-04-10-2020} with \eqref{eq3-04-10-2020}, the improvement only occurs deep inside the light cone, i.e., in $\{r<(1-\delta)t\}$. However, this is already sufficient to get integrable $L^2$ bounds on $v\del\del u$. More precisely, the Klein-Gordon component enjoys fast conical bounds: $$ \|(s/t)^{-2}v\|_{L^2(\Hcal_s)}\lesssim \Ecal^N(s,v)^{1/2} $$ for $N$ sufficiently large (see the proof for details, or observe it roughly via Proposition \ref{prop1-fast-kg}).
This additional $(s/t)^{-2}$ weight offsets the $(s/t)^{-1}$ conical factor in \eqref{eq4-04-10-2020}. Then (roughly) one obtains \begin{equation} \|v\del\del u\|_{L^2(\Hcal_s)}\lesssim s^{-2}\|(s/t)^{-1}v\|_{L^2(\Hcal_s)}\lesssim s^{-2}\Ecal^N(s,v)^{1/2} \end{equation} with $s^{-2}$ integrable with respect to $s$. With this observation on divergence $\rightarrow$ primitive $\rightarrow$ Hessian form, we will be able to treat some originally non-integrable quadratic terms. However, writing the system with primitives is not cost-free. As we will see in the following analysis, although a primitive of the wave component also satisfies a wave equation, its initial data cannot be easily constructed. To overcome this we consider a ``modified'' primitive instead, which is the primitive shifted by a solution to a free linear wave equation. In Section \ref{sec-reformulation}, the system \eqref{eq1-main} will be reformulated with these shifted primitives, and this leads to auxiliary systems of the form \eqref{eq-main}. In subsection \ref{subsec-auxi-structure} we will give a more detailed investigation of the structure of this type of system. The present article is roughly composed of three parts. Section \ref{sec-tech} forms the first part, in which we prepare the analytical tools. The second part is composed of Sections \ref{sec-reformulation} and \ref{sec-bootstrap}, in which we establish the global existence result for \eqref{eq1-main} and apply it to \eqref{eq1-Zakharov}. The last part, containing Sections \ref{sec-conclusion-wave-map} and \ref{sec-wave-maps-proof}, is dedicated to the stability result for totally geodesic wave maps, in which we consider the full system formulated in \cite{Ab-2019}. The proof is quite similar to that of \eqref{eq1-main} in Section \ref{sec-bootstrap}. But due to the higher-order terms and some other structures, neither can be seen as a special case of the other. \section{Recall of some technical tools}\label{sec-tech} In this section we are going to recall some useful tools in the hyperboloidal foliation method. We will start with the basic notation of the frames, vector fields and the high-order derivatives in the first subsection. Then we recall / reformulate some estimates based on the linear structure of the wave / Klein-Gordon equations in the following two subsections. \subsection{Basic notation and calculus within the hyperboloidal framework} \paragraph*{Frames and vector fields.} We are interested in the foliation of the interior of the light cone $\Kcal \coloneqq \{(t,x)|r < t-1\}\subset \RR^{2+1}$ where $(t,x)=(t,x^a)=(t,x^1,x^2)$ are the Cartesian coordinates and $r = \sqrt{|x^1|^2+|x^2|^2}$. Then the foliation is performed with $\Hcal_s \coloneqq \{(t,x)| t = \sqrt{s^2+r^2}\}$ as follows: \begin{align*} \Hcal_{[s_0,s_1]} &\coloneqq \bigcup_{s_0\leq s\leq s_1}(\Hcal_s\cap \Kcal) =\{(t,x)|r< t-1, (s_0)^2\leq s^2 \leq (s_1)^2\}, \end{align*} and \begin{align*} \Hcal_{[s_0,\infty]} &\coloneqq \bigcup_{s\geq s_0}(\Hcal_s\cap \Kcal) =\{(t,x)|r< t-1, s^2\geq(s_0)^2 \}. \end{align*} We recall the semi-hyperboloidal frame introduced in \cite{LM1} \footnote{Throughout this article, Greek indices take values in $\{0,1,2\}$ while Latin indices take values in $\{1,2\}$.}: $$ \delu_0:=\del_t,\quad \delu_a := \delb_a = (x^a/t)\del_t + \del_a, $$ where $\delb_a$ denotes the vector fields tangent to the hyperboloids $\Hcal_s$ (which are called hyperbolic derivatives).
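As a quick symbolic sanity check of this frame, the following small sympy sketch verifies that the hyperbolic derivatives annihilate $s=\sqrt{t^2-r^2}$, i.e., that $\delu_1,\delu_2$ are indeed tangent to the hyperboloids $\Hcal_s$:
\begin{verbatim}
import sympy as sp

t, x1, x2 = sp.symbols('t x1 x2', positive=True)
s = sp.sqrt(t**2 - x1**2 - x2**2)

def dbar(a, f):
    """Hyperbolic derivative (x^a/t) d_t f + d_a f."""
    x = {1: x1, 2: x2}[a]
    return (x / t) * sp.diff(f, t) + sp.diff(f, x)

print(sp.simplify(dbar(1, s)), sp.simplify(dbar(2, s)))   # 0 0
\end{verbatim}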
By a direct computation, we have the transition matrices between this frame and the natural frame $\{\del_{\alpha}\}$ as follows: \begin{equation}\label{eq semi-frame} \Phiu_{\alpha}^{\beta} := \left( \begin{array}{ccc} 1 &0 &0 \\ x^1/t &1 &0 \\ x^2/t &0 &1 \end{array} \right), \quad \Psiu_{\alpha}^{\beta} := \left( \begin{array}{ccc} 1 &0 &0 \\ -x^1/t &1 &0 \\ -x^2/t &0 &1 \end{array} \right) \end{equation} with $$ \delu_{\alpha} = \Phiu_{\alpha}^{\beta}\del_{\beta},\quad \del_{\alpha} = \Psiu_{\alpha}^{\beta}\delu_{\beta}. $$ Hence, if $T = T^{\alpha\beta}\del_{\alpha}\otimes\del_{\beta}$ is any 2-tensor defined in $\Kcal$ or a subset thereof, it can also be represented in $\{\delu_{\alpha}\}$ as follows: $$ T = \Tu^{\alpha\beta} \delu_{\alpha}\otimes\delu_{\beta} \quad\text{with}\quad \Tu^{\alpha\beta} = T^{\alpha'\beta'}\Psiu_{\alpha'}^{\alpha}\Psiu_{\beta'}^{\beta}. $$ \paragraph*{High-order derivatives.} Recall that in the region $\Kcal$, we introduced the following Lorentzian boosts in \cite{M-2020-strong}: $$ L_a = x^a\del_t + t\del_a,\quad a = 1, 2 $$ and the following notation for high-order derivatives: let $I= (i_1,i_2,\cdots, i_m)$, $J= (j_1,j_2,\cdots, j_n)$ be multi-indices taking values in $\{0,1,2\}$ and $\{1,2\}$ respectively, and then we define $$ \del^IL^J = \del_{i_1}\del_{i_2}\cdots \del_{i_m}L_{j_1}L_{j_2}\cdots L_{j_n} $$ to be an $(m+n)$-order derivative. Let $\mathscr{Z} = \{Z_i|i=0,1,\cdots, 6\}$ be a family of vector fields, where $$ Z_0 = \del_t,\quad Z_1=\del_1,\quad Z_2 = \del_2,\quad Z_3 = L_1,\quad Z_4=L_2,\quad Z_5 = \delu_1,\quad Z_6 = \delu_2. $$ The following notation: $$ Z^I := Z_{i_1}Z_{i_2}\cdots Z_{i_N} $$ denotes a high-order derivative of order $N$ on $\mathscr{Z}$ with multi-index $I = (i_1,i_2,\cdots, i_N)$ with $i_k\in \{0,1,\cdots, 6\}$. If there are at most $a$ partial derivatives, $b$ Lorentzian boosts and $c$ hyperbolic derivatives in $Z^I$, then $I$ is said to be of type $(a,b,c)$. We then recall the following notation introduced in \cite{M-2020-strong}: $$ \mathcal{I}_{p,k} = \{I| I \text{ is of type }(a,b,0)\text{ with }a+b \leq p, b\leq k \}, $$ and the following quantities that will be applied in order to control various high-order derivatives later: \begin{equation}\label{eq1 notation} \aligned |u|_{p,k} &:= \max_{K\in \mathcal{I}_{p,k}}|Z^K u|,\quad &&|u|_p := \max_{0\leq k\leq p}|u|_{p,k}, \\ |\del u|_{p,k} &:= \max_{\alpha=0,1,2}|\del_{\alpha} u|_{p,k}, &&|\del u|_p := \max_{0\leq k\leq p}|\del u|_{p,k}, \\ |\del^m u|_{p,k} &:= \max_{|I|=m}|\del^I u|_{p,k}, &&|\del^m u|_p := \max_{0\leq k\leq p}|\del^m u|_{p,k}, \\ |\dels u|_{p,k} &:= \max\{|\delu_1 u|_{p,k},|\delu_2u|_{p,k}\}, &&|\dels u|_p := \max_{0\leq k\leq p}|\dels u|_{p,k}, \\ |\del\dels u|_{p,k} &:=\max_{a,\alpha} \{|\delu_a\del_{\alpha} u|_{p,k},|\del_{\alpha}\delu_a u|_{p,k}\}, &&| \del\dels u|_p :=\max_{0\leq k\leq p}| \del\dels u|_{p,k}, \\ |\dels\dels u|_{p,k} &:=\max_{a,b} \{|\delu_a\delu_b u|_{p,k}\}, &&| \dels\dels u|_p :=\max_{0\leq k\leq p}| \dels\dels u|_{p,k}. \endaligned \end{equation} \paragraph*{Standard and conformal energy estimates on hyperboloids.} There are two types of energies, standard and conformal, defined in the hyperboloidal foliation framework.
\paragraph*{Standard and conformal energy estimates on hyperboloids.}
Two types of energies, the standard and the conformal one, are defined in the hyperboloidal foliation framework. The standard energy, obtained with the standard multiplier $\del_t u$, is defined as follows in the Minkowski metric:
\begin{equation}\label{standard energy}
E_{0,c}(s,u):= \int_{\Hcal_s}e_{0,c}[u]dx
\end{equation}
where the energy density is
\begin{equation}\label{density-standard}
\aligned
e_{0,c}[u]:=&|\del_tu|^2+\sum_a|\del_au|^2 + 2(x^a/t)\del_tu\del_au + c^2u^2
\\
=&\sum_a |\delu_a u|^2 + |(s/t)\del_tu|^2 + c^2u^2
\\
=&|\delu_{\perp}u|^2 + \sum_a|(s/t)\del_a u|^2 + \sum_{a<b}\big|t^{-1}\Omega_{ab}u\big|^2 + c^2u^2
\endaligned
\end{equation}
with $\delu_{\perp} := \del_t + (x^a/t)\del_a$. We denote by $e_0[u] = e_{0,c=0}[u]$. For the standard energy, we have the following estimate (for a proof, see for example \cite{LM1}):
\begin{proposition}[Standard energy estimate]\label{prop 1 energy}
Let $u$ be a $C^2$ solution to the following wave / Klein-Gordon equation
$$
\Box u + c^2 u = F,
$$
in the region $\Hcal_{[s_0,s_1]}$, vanishing near the conical boundary $\del\Kcal = \{r=t-1\}$. Then the following energy estimate holds:
\begin{equation}\label{ineq 3 prop 1 energy}
\aligned
E_{0,c}(s,u)^{1/2}\leq& E_{0,c}(s_0,u)^{1/2} + \int_{s_0}^s \|F\|_{L^2(\Hcal_\tau)}d\tau.
\endaligned
\end{equation}
\end{proposition}
The conformal energy on the hyperboloid $\Hcal_s$ is defined as
$$
E_2(s,u) := \int_{\Hcal_s}\Big(\sum_a|s\delu_a u|^2 + s^{-2}|K_2u + su|^2\Big)dx
$$
where $K_2 = s^2(s/t)\del_t + 2sx^a\delu_a$ is the conformal multiplier. We also have an estimate for the energy of this type:
\begin{proposition}[Conformal energy estimate on hyperboloids]\label{prop-conformal}
Let $u$ be a sufficiently regular function defined in $\Hcal_{[s_0,s_1]}$ which vanishes near the conical boundary $\del\Kcal = \{r=t-1\}$. Then the following estimate holds:
\begin{equation}\label{eq8-15-06-2020}
E_2(s_1,u)^{1/2} \leq E_2(s_0,u)^{1/2} + \int_{s_0}^{s_1}s\|\Box u\|_{L^2(\Hcal_s)}ds.
\end{equation}
\end{proposition}
\noindent Unlike the standard energy, the conformal one does not directly control the derivative $\del_t u$ and $u$ itself. Therefore, the following lemma was established in \cite{M4} in order to get a bound on $u$:
\begin{lemma}\label{proposition 1 01-01-2019}
Let $u$ be a $C^1$ function defined in $\Hcal_{[s_0,s_1]}$ which vanishes near $\del\Kcal$. Then
\begin{equation}\label{eq 1 14-12-2018}
\|(s/t)u\|_{L^2(\Hcal_{s_1})}\leq \|(s/t)u\|_{L^2(\Hcal_{s_0})} + C\int_{s_0}^{s_1}s^{-1}E_2(s,u)^{1/2}ds.
\end{equation}
\end{lemma}
Once $u$ is bounded, recalling that
$$
\|s^{-1}K_2 u + u\|_{L^2(\Hcal_s)} = \|s(s/t)\del_t u + 2x^a\delu_a u\|_{L^2(\Hcal_s)}
$$
is bounded by $E_2(s,u)^{1/2}$, we see that $\|(s/t)^2s\del_t u\|_{L^2(\Hcal_s)}$ is bounded by the following quantity:
\begin{equation}\label{eq1-09-10-2020}
F_2(s_0;s,u) = \|(s/t)u\|_{L^2(\Hcal_s)} + E_2(s,u)^{1/2} + \int_{s_0}^s\tau^{-1}E_2(\tau,u)^{1/2}d\tau.
\end{equation}
The high-order versions are defined as follows:
\begin{equation}
\Fcal_2^{p,k}(s_0;s,u):= \max_{|I|+|J|\leq p\atop |J|\leq k}F_2(s_0;s,\del^IL^J u),\quad
\Fcal_2^N(s_0;s,u):= \max_{|I|+|J|\leq N}F_2(s_0;s,\del^IL^J u).
\end{equation}
A sketch of the proofs of this conformal energy estimate and of Lemma \ref{proposition 1 01-01-2019} within a flat background metric can be found in \cite{M-2020-strong}.
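Let us record why $F_2$ controls $\|(s/t)^2s\del_t u\|_{L^2(\Hcal_s)}$, as claimed above (an elementary decomposition, written out for the reader's convenience):
$$
(s/t)^2s\del_t u = (s/t)\big(s^{-1}K_2u + u\big) - (s/t)u - 2(s/t)x^a\delu_a u,
$$
and since $(s/t)\leq 1$ and $(s/t)|x^a|\leq s$ in $\Kcal$, taking $L^2(\Hcal_s)$ norms yields
$$
\|(s/t)^2s\del_t u\|_{L^2(\Hcal_s)}\leq E_2(s,u)^{1/2} + \|(s/t)u\|_{L^2(\Hcal_s)} + 2\sum_a\|s\delu_a u\|_{L^2(\Hcal_s)}\leq CF_2(s_0;s,u),
$$
where $\|(s/t)u\|_{L^2(\Hcal_s)}$ is controlled through Lemma \ref{proposition 1 01-01-2019}, which accounts for the integral term in \eqref{eq1-09-10-2020}.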
\noindent For the convenience of the discussion, we also introduce the following high-order energies:
\begin{equation}\label{eq 3' 01-01-2019}
\Ecal_{0,c}^{p,k}(s,u) := \max_{|I|+|J|\leq p\atop |J|\leq k}E_{0,c}(s,\del^IL^J u),\quad
\Ecal_0^{p,k}(s,u) := \max_{|I|+|J|\leq p\atop |J|\leq k}E_0(s,\del^IL^J u),
\end{equation}
\begin{equation}\label{eq 3 01-01-2019}
\Ecal_{0,c}^N(s,u) := \max_{|I|+|J|\leq N}E_{0,c}(s,\del^IL^J u),\quad
\Ecal_0^N(s,u) := \max_{|I|+|J|\leq N}E_0(s,\del^IL^J u),
\end{equation}
\begin{equation}\label{eq 1 09-26-2020}
\Ecal_2^N(s,u) := \max_{|I|+|J|\leq N}E_2(s,\del^IL^J u),
\quad
\Ecal_2^{p,k}(s,u) := \max_{|I|+|J|\leq p\atop |J|\leq k}E_2(s,\del^IL^J u).
\end{equation}
\paragraph*{Bounds on high-order derivatives by energies.}
These bounds are established in \cite{M-2020-strong}:
\\
- $L^2$ bounds:
\begin{equation}\label{eq1-10-06-2020}
\|(s/t)|\del u|_{p,k}\|_{L^2(\Hcal_s)} + \||\dels u|_{p,k}\|_{L^2(\Hcal_s)} + \|c|u|_{p,k}\|_{L^2(\Hcal_s)}\leq C\Ecal_{0,c}^{p,k}(s,u)^{1/2},
\end{equation}
\begin{equation}\label{eq5-10-06-2020}
\|s|\del\dels u|_{p-1,k-1}\|_{L^2(\Hcal_s)} + \|t|\dels\dels u|_{p-1,k-1}\|_{L^2(\Hcal_s)}
\leq C\Ecal_0^{p,k}(s,u)^{1/2},
\end{equation}
\begin{equation}\label{eq2-10-06-2020}
\aligned
\|(s/t)^2s|\del u|_{p,k}\|_{L^2(\Hcal_s)}& + \|s|\dels u|_{p,k}\|_{L^2(\Hcal_s)} + \|(s/t)|u|_{p,k}\|_{L^2(\Hcal_s)}
\\
\leq& C\Fcal_2^{p,k}(s_0;s,u),
\endaligned
\end{equation}
\begin{equation}\label{eq6-10-06-2020}
\aligned
\|(s/t)s^2|\del\dels u|_{p-1,k-1}\|_{L^2(\Hcal_s)}& + \|st|\dels\dels u|_{p-1,k-1}\|_{L^2(\Hcal_s)}
\\
\leq& C\Fcal_2^{p,k}(s_0;s,u).
\endaligned
\end{equation}
\\
- $L^{\infty}$ bounds:
\begin{equation}\label{eq3-10-06-2020}
\aligned
\|s|\del u|_{p,k}\|_{L^{\infty}(\Hcal_s)}& + \|t|\dels u|_{p,k}\|_{L^\infty(\Hcal_s)} + \|ct|u|_{p,k}\|_{L^{\infty}(\Hcal_s)}
\\
\leq& C\Ecal_{0,c}^{p+2,k+2}(s,u)^{1/2},
\endaligned
\end{equation}
\begin{equation}\label{eq7-10-06-2020}
\|st|\del\dels u|_{p-1,k-1}\|_{L^\infty(\Hcal_s)} + \|t^2|\dels\dels u|_{p-1,k-1}\|_{L^{\infty}(\Hcal_s)}
\leq C\Ecal_0^{p+2,k+2}(s,u)^{1/2},
\end{equation}
\begin{equation}\label{eq4-10-06-2020}
\aligned
\|(s/t)s^2|\del u|_{p,k}\|_{L^\infty(\Hcal_s)} + \|st|\dels u|_{p,k}\|_{L^\infty(\Hcal_s)} &+ \|s|u|_{p,k}\|_{L^\infty(\Hcal_s)}
\\
\leq& C\Fcal_2^{p+2,k+2}(s_0;s,u),
\endaligned
\end{equation}
\begin{equation}\label{eq8-10-06-2020}
\aligned
\|s^3|\del\dels u|_{p-1,k-1}\|_{L^{\infty}(\Hcal_s)}& + \|st^2|\dels\dels u|_{p-1,k-1}\|_{L^{\infty}(\Hcal_s)}
\\
\leq& C\Fcal_2^{p+2,k+2}(s_0;s,u).
\endaligned
\end{equation}
We also need the following bounds on products and null quadratic forms in $\Hcal_{[s_0,s_1]}$. Firstly,
\begin{equation}\label{eq12-10-06-2020}
|AB|_{p,k}\leq C|A|_{p,k}|B|_{p_1,k_1} + C|A|_{p_1,k_1}|B|_{p,k}
\end{equation}
where $p_1 = [p/2], k_1 = [k/2]$, $A,B$ are sufficiently regular in $\Hcal_{[s_0,s_1]}$ and $C$ is a constant determined by $p$. Furthermore, let $A$ be a (constant coefficient) quadratic null form, i.e.,
$$
A^{\alpha\beta}\xi_{\alpha}\xi_{\beta} = 0\quad \text{whenever}\quad \xi_0^2 - \xi_1^2 - \xi_2^2 = 0.
$$
Then
\begin{equation}\label{eq13-10-06-2020}
\aligned
|A^{\alpha\beta}\del_{\alpha}u\del_{\beta}v|_{p,k}\leq& C|A|(s/t)^2|\del u|_{p_1,k_1}|\del v|_{p,k} + C(s/t)^2|A||\del u|_{p,k}|\del v|_{p_1,k_1}
\\
&+ C|A||\dels u|_{p_1,k_1}|\del v|_{p,k} + C|A||\dels u|_{p,k}|\del v|_{p_1,k_1}
\\
&+C|A||\del u|_{p_1,k_1}|\dels v|_{p,k} + C|A||\del u|_{p,k}|\dels v|_{p_1,k_1}
\endaligned
\end{equation}
where $|A| = \max_{\alpha,\beta}|A^{\alpha\beta}|$.
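A standard example to keep in mind (illustrative only, and not needed in the sequel) is the null form induced by the Minkowski metric itself: taking $A^{\alpha\beta} = \m^{\alpha\beta}$, one has
$$
\m^{\alpha\beta}\del_{\alpha}u\,\del_{\beta}v = -\del_tu\,\del_tv + \sum_a\del_au\,\del_av,
\qquad
\m^{\alpha\beta}\xi_{\alpha}\xi_{\beta} = -\xi_0^2+\xi_1^2+\xi_2^2,
$$
which vanishes whenever $\xi_0^2-\xi_1^2-\xi_2^2=0$, so that \eqref{eq13-10-06-2020} applies to it.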
The estimates \eqref{eq12-10-06-2020} and \eqref{eq13-10-06-2020} are established in \cite{LM1}; for a proof, see for example \cite{M-2020-strong}.
\subsection{Linear estimates on the wave equation}
\paragraph*{Bounds on the Hessian form of the wave component.}
We are now in a position to recall various bounds for the wave and Klein-Gordon equations that follow from their linear structure. For the Hessian form, we have the following proposition:
\begin{proposition}\label{prop1-14-08-2020}
Let $u$ be a sufficiently regular function defined in $\Hcal_{[s_0,s_1]}$. Suppose that $|I|+|J|\leq p$ and $|J|\leq k$. Then
\begin{equation}\label{eq1 lem Hessian-flat-zero}
(s/t)^2|\del_\alpha\del_\beta \del^IL^J u| \leq C|\Box u|_{p,k} + Ct^{-1}|\del u|_{p+1,k+1},
\end{equation}
\begin{equation}\label{eq2 lem Hessian-flat-zero}
(s/t)^2|\del\del u|_{p,k} \leq C|\Box u|_{p,k} + Ct^{-1}|\del u|_{p+1,k+1}.
\end{equation}
\end{proposition}
This is established in \cite{LM1}; a sketch of the proof can be found in \cite{M-2020-strong}.
\paragraph*{Decay bounds based on Poisson's formula.}
By a direct calculation with Poisson's formula, we have the following decay bounds for the free linear wave equation:
\begin{lemma}\label{lem1-13-06-2020}
Let $u$ be the $C^2$ solution to the following Cauchy problem for the free linear wave equation:
\begin{equation}
\Box u = 0,\quad u(t_0,x) = u_0, \quad \del_tu(t_0,x) = u_1, \quad t_0\geq 2
\end{equation}
with $u_0,u_1$ sufficiently regular and compactly supported in $\{|x|<t_0-1\}$. Suppose that
$$
|u_0(x)| + |u_1(x)| + |\del u_0(x)|\leq C_I.
$$
Then for $(t,x)\in\Kcal = \{r<t-1\}$ and $t\geq t_0$,
\begin{equation}\label{eq1-17-08-2020}
|u(t,x)|\leq CC_I t_0 s^{-1}\leq CC_It_0(t-r)^{-1/2}t^{-1/2}, \quad s=\sqrt{t^2-|x|^2}.
\end{equation}
\end{lemma}
This is a classical result; a proof is given in \cite{M-2020-strong}.
\paragraph*{$L^{\infty}$ estimate on the wave equation based on integration along hyperbolas.}
We also need the following bounds in order to establish sharp decay without uniform energy bounds. They were established in \cite{M-2020-strong}. We recall the following curves:
$$
\aligned
\gamma_{t,x}: \RR&\rightarrow \RR^{2+1}
\\
\tau&\rightarrow \big(\gamma_{t,x}^0(\tau),\gamma_{t,x}^1(\tau),\gamma_{t,x}^2(\tau)\big)
\endaligned
$$
with
$$
\gamma_{t,x}^0(\tau) = \tau,\quad \gamma_{t,x}^a(\tau) = (x^a/r)\left(\sqrt{\tau^2+\frac{1}{4}C_{t,x}^2} - \frac{1}{2}C_{t,x}\right)
$$
where
$$
C_{t,x} = \frac{t^2-r^2}{r}.
$$
These are (time-like) hyperbolas centered at $(0,-\frac{x^a}{2r}C_{t,x})$ with hyperbolic radius $\frac{1}{2}C_{t,x}$. Then we recall the following estimate:
\begin{proposition}\label{prpo2 wave-sharp}
Let $u$ be a sufficiently regular function defined in $\Hcal_{[s_0,s_1]}$ which vanishes near $\del \Kcal = \{r=t-1\}$. Then the following bound holds:
\begin{equation}\label{eq1-29-05-2020}
|s\del_t u(t,x)|\leq Cs_0\|\del_t u\|_{L^{\infty}(\Hcal_{s_0})} + C\bigg|\int_{s_0}^t W_{t,x}[u](\tau) e^{-\int_{\tau}^t P_{t,x}(\eta)d\eta} d\tau\bigg|
\end{equation}
where
$$
W_{t,x}[u](\tau) := S^w[u]\Big|_{\gamma(\tau;t,x)} + \Delta^w[u]\Big|_{\gamma(\tau;t,x)}
$$
and
$$
P_{t,x}(\tau) := P\Big|_{\gamma(\tau;t,x)},
$$
with
$$
\aligned
&P(t,r) := \frac{t-r}{t^2+r^2}\big(1+(3r/2t)\big) \geq \frac{1}{4}(s/t)^2t^{-1},
\\
&S^w[u] := t^{1/2}(t-r)^{1/2}\frac{t^2\Box u}{t^2+r^2},\quad \Delta^w[u] := t^{1/2}(t-r)^{1/2}\frac{t^2\sum_a\delu_a\delu_a u}{t^2+r^2}.
\endaligned
$$
\end{proposition}
\subsection{Linear estimates on the Klein-Gordon equation}
\paragraph*{Conical decay of the Klein-Gordon component.}
As explained before, one of the important techniques applied in this paper is ``paying conical for principal'' decay (see \cite{M-2020-strong}); hence we need the following proposition describing the conical decay of the Klein-Gordon component.
\begin{proposition}\label{prop1-fast-kg}
Let $v$ be a sufficiently regular solution to
\begin{equation}\label{eq1 prop fast-KG}
\Box v + c^2 v = f.
\end{equation}
Then
\begin{equation}\label{eq2 prop fast-KG}
c^2|v|_{p,k}\leq C(s/t)^2|\del v|_{p+1,k+1} + C|f|_{p,k}.
\end{equation}
\end{proposition}
Here we remark on the factor $(s/t)^2$ on the right-hand side. This bound is closely related to the proof of Proposition \ref{prop1-14-08-2020}. A sketch of the proof can be found in \cite{M-2020-strong}.
\paragraph*{$L^{\infty}-L^{\infty}$ estimate on the Klein-Gordon component.}
We now reformulate the $L^\infty-L^\infty$ estimate on the Klein-Gordon component for the sharp decay. Before the main statement, we introduce the following curves:
$$
\aligned
\varphi_{t,x}: \RR &\rightarrow \{(t',x')\in \RR^{2+1}, t'>0\}
\\
\lambda &\rightarrow (\lambda t/s,\lambda x/s)
\endaligned
$$
which are the half-lines from $(0,0)$ to $(t,x)$. They are the integral curves of $\Lcal = (s/t)^{-1}\big(\del_t + (x^a/t)\del_a\big)$. For each $(t,x)\in \Hcal_{[s_0,s_1]}$, there exists a point $(t_0,x_0)$ such that $(t_0,x_0)\in \varphi_{t,x}$ and $(t_0,x_0)\in \Hcal_{s_0}^*\cup \del \Kcal$. Here $\Hcal_{s_0}^* = \Hcal_{s_0}\cap\Kcal$ is the part of $\Hcal_{s_0}$ contained in the cone $\Kcal$. Then we state the main result:
\begin{proposition}\label{lem1'-01-09-2020}
Suppose that $v$ is a $C^2$ solution to the following Klein-Gordon equation:
\begin{equation}\label{eq'14-01-09-2020}
\Box v + c^2(1-\omega)v = f
\end{equation}
in $\Hcal_{[s_0,s_1]}$, vanishing near $\del\Kcal$, with $s_0\geq 2$. Suppose that $\omega$ and $f$ are $C^1$ functions defined in $\Hcal_{[s_0,s_1]}$, vanishing near $\del\Kcal$, with $|\omega|\leq 1/2$. Then for $(t,x)\in \Hcal_{[s_0,s_1]}$ with $0\leq r/t\leq 3/5$,
\begin{equation}
\aligned
s|v|(t,x) + &s|((s/t)\del_t + (x^a/s)\delu_a)v|(t,x)
\\
\leq& Cs_0\sup_{\Hcal_{s_0}}\{|v| + |\del v|\} + C\int_{s_0}^s\lambda\big(|f| + \lambda^{-2}|v|_{2,2}\big)\big|_{\varphi_{t,x}(\lambda)} d\lambda
\\
&+ C\int_{s_0}^s\big(\lambda|\del v| + \lambda|v| + |v|_{1,1}\big)|\del \omega|\big|_{\varphi_{t,x}(\lambda)}d\lambda
\endaligned
\end{equation}
and for $(t,x)\in \Hcal_{[s_0,s_1]}$ with $3/5\leq r/t<1$,
\begin{equation}
\aligned
s|v|(t,x)& + s|((s/t)\del_t + (x^a/s)\delu_a)v|(t,x)
\\
\leq& C\int_{\lambda_0}^s\lambda\big(|f| + \lambda^{-2}|v|_{2,2}\big)\big|_{\varphi_{t,x}(\lambda)} d\lambda
\\
&+ C(s/t)^{-1}\int_{\lambda_0}^s\big((s/t)\lambda|\del v| + \lambda|v| + |v|_{1,1}\big)\big((s/t)^2|\del \omega| + |\dels \omega|)\big|_{\varphi_{t,x}(\lambda)}d\lambda,
\endaligned
\end{equation}
with $\lambda_0 = \sqrt{\frac{t+r}{t-r}}\geq \sqrt{2}(t/s)$.
\end{proposition}
\begin{proof}[Sketch of proof]
Recall the following decomposition:
$$
\Box v = s^{-1}\big((s/t)\del_t + (x^a/s)\delu_a\big)^2(sv) - \frac{x^ax^b}{s^2}\delu_a\delu_b v - \sum_{a}\delu_a\delu_a v .
$$
Then \eqref{eq'14-01-09-2020} can be written in the following form:
\begin{equation}\label{eq'16-01-09-2020}
\Lcal^2(sv) + c^2(1-\omega)sv = sf + s\big(s^{-2}x^ax^b\delu_a\delu_b + \sum_a\delu_a\delu_a \big)v
\end{equation}
where $\Lcal = (s/t)\del_t + (x^a/s)\delu_a = (s/t)^{-1}\big(\del_t + (x^a/t)\del_a\big)$.
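The two expressions for $\Lcal$ are indeed equal; we record the direct computation for convenience, using $\delu_a = (x^a/t)\del_t + \del_a$ and $s^2 = t^2-r^2$:
$$
(s/t)\del_t + (x^a/s)\delu_a
= \Big(\frac{s}{t} + \frac{r^2}{st}\Big)\del_t + \frac{x^a}{s}\del_a
= \frac{s^2+r^2}{st}\del_t + \frac{x^a}{s}\del_a
= (s/t)^{-1}\Big(\del_t + \frac{x^a}{t}\del_a\Big).
$$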
Equation \eqref{eq'16-01-09-2020} can be regarded as an ODE for $sv$ along the integral curves of $\Lcal$, which are segments. Let $\varphi_{t,x}(\cdot)$ be one of these integral curves such that $\varphi_{t,x}(s) = (t,x)$ with $s = \sqrt{t^2-r^2}$. Then
$$
\varphi^0_{t,x}(\lambda) = (t/s)\lambda,\quad \varphi^a_{t,x}(\lambda) = (x^a/s)\lambda.
$$
Let $u$ be a sufficiently regular function defined in $\Hcal_{[s_0,s_1]}$, and
$$
u_{t,x}(\lambda):= u|_{\varphi_{t,x}(\lambda)} = u\big((t/s)\lambda, (x^a/s)\lambda\big);
$$
then
$$
u'_{t,x}(\lambda) = \frac{d}{d\lambda}u_{t,x}(\lambda) = \Lcal u \big((t/s)\lambda, (x^a/s)\lambda\big) = (\Lcal u)|_{\varphi_{t,x}(\lambda)}.
$$
With these observations, \eqref{eq'16-01-09-2020} is written as
\begin{equation}\label{eq'17-01-09-2020}
V_{t,x}''(\lambda) + c^2(1-\omega)V_{t,x}(\lambda) = \lambda \big(f + s^{-2}x^ax^b\delu_a\delu_bv + \sum_a\delu_a\delu_av \big)\big|_{\varphi_{t,x}(\lambda)}
\end{equation}
where $V_{t,x}(\lambda) := (sv)|_{\varphi_{t,x}(\lambda)}$. Here we also remark that for $(t',x') = \varphi_{t,x}(\lambda')$ with $s' = \sqrt{|t'|^2 - |x'|^2}$, one has
$$
s' = \lambda'
$$
since $t' = \lambda' t/s$ and $x' = \lambda' x/s$.

Then we make an observation on the integral curves $\varphi_{t,x}$. They are half-lines from $(0,0)$ to $(t,x)\in \Hcal_{[s_0,s_1]}$. Recall that $(t_0,x_0)$ is the point where $\varphi_{t,x}$ enters $\Hcal_{[s_0,s_1]}$. A direct calculation shows that:
\\
- when $0\leq r/t\leq 3/5$, $(t_0,x_0)\in \Hcal_{s_0}$; writing $(t_0,x_0) = \varphi_{t,x}(\lambda_0)$, we have $\lambda_0=s_0$,
\\
- when $3/5\leq r/t <1$, $(t_0,x_0)\in\del\Kcal = \{t=r+1\}$ and $\lambda_0 = \sqrt{\frac{t+r}{t-r}}\geq \sqrt{2}t/s$.

Now for a fixed $(t,x)\in \Hcal_s^*\subset \Hcal_{[s_0,s_1]}$ we integrate \eqref{eq'17-01-09-2020}. Remark that, since $|\omega|\leq 1/2$, the eigenvalues of the characteristic polynomial are purely imaginary and the eigenvectors are uniformly bounded. So by basic ODE theory we arrive at the following bound:
\begin{equation}\label{eq9-08-10-2020}
\aligned
|V_{t,x}'(s)| + |V_{t,x}(s)|\leq& |V_{t,x}'(\lambda_0)| + |V_{t,x}(\lambda_0)|
\\
&+ C\int_{\lambda_0}^s\lambda \big(|f| + |s^{-2}x^ax^b\delu_a\delu_bv| + \sum_a|\delu_a\delu_av|\big)\big|_{\varphi_{t,x}(\lambda)}d\lambda
\\
& + C\int_{\lambda_0}^s(|V_{t,x}(\lambda)| + |V'_{t,x}(\lambda)|) |\Lcal \omega|\big|_{\varphi_{t,x}(\lambda)}d\lambda.
\endaligned
\end{equation}
Then remark that
\\
1. $(s/t)$ and $(r/t)$ are constant along $\varphi_{t,x}$, and when $0\leq r/t\leq 3/5$ we have $4/5\leq s/t\leq 1$, i.e., we can omit all factors $(s/t)$ (regarding them as $1$).
\\
2. When $0\leq r/t\leq 3/5$, $|V_{t,x}(\lambda_0)| + |V_{t,x}'(\lambda_0)|\leq Cs_0\sup_{\Hcal_{s_0}}\{|v| + |\del v|\}$.
\\
3. When $3/5\leq r/t<1$, $|V_{t,x}(\lambda_0)| + |V_{t,x}'(\lambda_0)|=0$.
Furthermore,
$$
\aligned
|V_{t,x}(\lambda)| + |V_{t,x}'(\lambda)|\leq& (\lambda + 1)|v(\lambda t/s,\lambda x/s)| + \lambda(s/t)|\del_t v(\lambda t/s,\lambda x/s)|
\\
&+ C(s/t)^{-1}\lambda |\dels v(\lambda t/s,\lambda x/s)|
\\
\leq& C(\lambda + 1)|v|_{\varphi_{t,x}(\lambda)} + C\lambda(s/t)|\del v|_{\varphi_{t,x}(\lambda)} + C|L v|_{\varphi_{t,x}(\lambda)},
\endaligned
$$
$$
|\delu_a\delu_b v|\leq Ct^{-2}|v|_{2,2},
$$
$$
|\Lcal \omega| = |(s/t)\del_t \omega + (x^a/s)\delu_a\omega| \leq C(s/t)|\del \omega| + C(s/t)^{-1}|\dels \omega|.
$$
Then \eqref{eq9-08-10-2020} is rewritten as
$$
\aligned
|& sv(t,x)| +|v(t,x) + s\big((s/t)\del_tv + (x^a/s)\delu_a v\big)|
\\
\leq& C\int_{\lambda_0}^s\lambda(|f| + \lambda^{-2}|v|_{2,2})\big|_{\varphi_{t,x}(\lambda)}d\lambda
\\
&+ C\int_{\lambda_0}^s\lambda \big(|v| + (s/t)|\del v| + \lambda^{-1}|v|_{1,1}\big)\big((s/t)|\del \omega| + (s/t)^{-1}|\dels\omega|_{1,1}\big)\big|_{\varphi_{t,x}(\lambda)}d\lambda
\\
&+ \left\{
\begin{aligned}
&Cs_0\sup_{\Hcal_{s_0}} \{|v| + |\del v|\},\quad &&0\leq r/t\leq 3/5,
\\
&0,\quad &&3/5\leq r/t<1,
\end{aligned}
\right.
\endaligned
$$
which concludes the desired result.
\end{proof}

\section{Reformulation of the systems}\label{sec-reformulation}
This section, together with the following one, is devoted to the model system \eqref{eq1-main}. In this section we construct the auxiliary systems \eqref{eq4-03-09-2020} and \eqref{eq1-05-09-2020}.
\subsection{Construction of the auxiliary systems}
We start with \eqref{eq1a-main}. For any $C^3$ solution $(u,v)$ to \eqref{eq1a-main}, suppose that $w$ is a solution to the following wave equation:
$$
\Box w = v^2.
$$
Then
$$
\Box \big(A^{\alpha}\del_{\alpha}w\big) = A^{\alpha}\del_{\alpha}(v^2),
$$
which shows that $A^{\alpha}\del_{\alpha}w$ and $u$ satisfy the same wave equation. Based on this observation, we make the following reformulation. Let $(u,v)$ be a $C^3$ solution to the Cauchy problem of \eqref{eq1-main} with the initial data
\begin{equation}\label{eq3-03-09-2020}
u(2,x) = u_0(x), \quad \del_t u(2,x) = u_1(x),\quad v(2,x) = v_0(x),\quad \del_tv(2,x) = v_1(x).
\end{equation}
Then for the following auxiliary Cauchy problem
\begin{equation}\label{eq4-03-09-2020}
\left\{
\aligned
&\Box w = v^2,
\\
&\Box w_0 = 0,
\\
&\Box \tilde{v} + c^2\tilde{v} = B^{\alpha}\tilde{v}\del_{\alpha}(w_0 + A^{\beta}\del_{\beta}w)
\endaligned
\right.
\end{equation}
with
\begin{equation}\label{eq5-03-09-2020}
\aligned
&w(2,x) = \del_tw(2,x) = 0,\quad \tilde{v}(2,x) = v_0(x),\quad \del_t \tilde{v}(2,x) = v_1(x),
\\
&w_0(2,x) = u_0(x), \quad \del_tw_0(2,x) = u_1(x) - A^0v_0^2(x),
\endaligned
\end{equation}
we can establish the following result:
\begin{lemma}\label{lem1-05-10-2020}
Let $(u,v)$ be the $C^3$ solution to the Cauchy problem associated with \eqref{eq1-main} with initial data \eqref{eq3-03-09-2020}, and let $(w,w_0,\tilde{v})$ be the $C^2$ solution to \eqref{eq4-03-09-2020} with initial data \eqref{eq5-03-09-2020}. Then
\begin{equation}\label{eq1-08-09-2020}
u = w_0 + A^{\alpha}\del_{\alpha}w,\quad v = \tilde{v}
\end{equation}
when both solutions exist. Furthermore, when $(w,w_0,\tilde{v})$ exists, $(u,v)$ defined through \eqref{eq1-08-09-2020} is the solution to \eqref{eq1a-main} with \eqref{eq3-03-09-2020}.
\end{lemma}
\begin{proof}
This is an argument based on the uniqueness of \eqref{eq1-main}. In fact we calculate
\begin{equation}\label{eq8-03-09-2020}
\Box \big(w_0 + A^{\alpha}\del_{\alpha}w\big) = \Box w_0 + A^{\alpha}\del_{\alpha}\Box w = A^{\alpha}\del_{\alpha}(v^2)
\end{equation}
and, on the initial slice,
\begin{equation}\label{eq7-03-09-2020}
w_0(2,x) + A^{\alpha}\del_{\alpha}w(2,x) = u_0(x).
\end{equation}
On the other hand,
$$
\del_t(w_0 + A^{\alpha}\del_{\alpha}w) = \del_tw_0 + A^0\del_t\del_tw + A^a\del_t\del_aw.
$$
On $\{t=2\}$, recall that $\del_tw = \del_aw = 0$, so that $\del_t\del_a w(2,x) = \del_a\del_t w(2,x) = 0$ and
$$
\del_t(w_0 + A^{\alpha}\del_{\alpha}w)(2,x) = \del_tw_0(2,x) + A^0\del_t\del_t w(2,x) = u_1(x) - A^0v_0^2(x) + A^0\del_t\del_t w(2,x).
$$
Furthermore, remark that
$$
\Box w = v^2\ \Rightarrow\ \del_t\del_t w(2,x) = v^2(2,x) + \sum_a\del_a\del_aw(2,x) = v_0^2(x).
$$
Here we used $w(2,\cdot) = 0$, so that $\sum_a\del_a\del_a w(2,x) = 0$. Substituting this into the last expression, we obtain:
\begin{equation}\label{eq6-03-09-2020}
\del_t(w_0 + A^{\alpha}\del_{\alpha}w)(2,x) = u_1(x).
\end{equation}
Let $\tilde{u} = w_0+A^{\alpha}\del_{\alpha}w$. In view of \eqref{eq8-03-09-2020}, \eqref{eq7-03-09-2020} and \eqref{eq6-03-09-2020}, $(\tilde{u}, \tilde{v})$ satisfies the following Cauchy problem
$$
\left\{
\aligned
&\Box \tilde{u} = A^{\alpha}\del_{\alpha}(v^2)
\\
&\Box \tilde{v} + c^2\tilde{v} = B^{\alpha}\tilde{v}\del_{\alpha}\tilde{u}
\endaligned
\right.
$$
with initial data
$$
\tilde{u}(2,x) = u_0(x), \quad \del_t \tilde{u}(2,x) = u_1(x),\quad \tilde{v}(2,x) = v_0(x),\quad \del_t\tilde{v}(2,x) = v_1(x).
$$
Then by the uniqueness theory for \eqref{eq1a-main}, the desired result is obtained.
\end{proof}
Similarly to \eqref{eq1a-main}, \eqref{eq1b-main} can be reformulated as follows. We consider
\begin{equation}\label{eq1-05-09-2020}
\left\{
\aligned
&\Box w = v^2,
\\
&\Box w_0 = 0,
\\
&\Box \tilde{v} + c^2\tilde{v} = B\tilde{v}(w_0 + A^{\alpha\beta}\del_{\alpha}\del_{\beta}w)
\endaligned
\right.
\end{equation}
with initial data constructed as follows:
\begin{equation}\label{eq2-05-09-2020}
\aligned
&w(2,x) = \del_tw(2,x) = 0,\quad \tilde{v}(2,x) = v_0(x),\quad \del_t\tilde{v}(2,x) = v_1(x),
\\
&w_0(2,x) = u_0(x),\quad \del_tw_0(2,x) = u_1(x) - 2A^{00}v_0(x)v_1(x).
\endaligned
\end{equation}
Then, similarly to the previous result, one has
\begin{lemma}\label{lem1-06-10-2020}
Let $(u,v)$ be a $C^4$ solution to \eqref{eq1b-main} with the following initial data
\begin{equation}\label{eq3-08-09-2020}
u(2,x) = u_0(x),\quad \del_t u(2,x) = u_1(x),\quad v(2,x) = v_0(x),\quad \del_t v(2,x) = v_1(x).
\end{equation}
Suppose that $(w,w_0,\tilde{v})$ is the $C^2$ solution to the Cauchy problem of \eqref{eq1-05-09-2020} with initial data \eqref{eq2-05-09-2020}. Then
\begin{equation}\label{eq2-08-09-2020}
u = w_0 + A^{\alpha\beta}\del_{\alpha}\del_{\beta}w,\quad v = \tilde{v}
\end{equation}
when both $(u,v)$ and $(w,w_0,\tilde{v})$ exist. Furthermore, when $(w,w_0,\tilde{v})$ exists, $(u,v)$ defined via \eqref{eq2-08-09-2020} is the solution to \eqref{eq1b-main} with \eqref{eq3-08-09-2020}.
\end{lemma}
\begin{proof}
The proof is quite similar. We only need to take care of the local uniqueness of the system \eqref{eq1b-main}. This can be regarded as an application of Theorem 2.2 in Section 1.2 of \cite{Sogge-2008-book}.
\end{proof}
In order to treat \eqref{eq4-03-09-2020} and \eqref{eq1-05-09-2020} simultaneously, we consider the more general system
\begin{equation}\label{eq-main}
\left\{
\aligned
&\Box w = v^2,
\\
&\Box w_0 = 0,
\\
&\Box v + c^2v = B^{\alpha}v\del_{\alpha}w_0 + Kvw_0 + vA^{\alpha\beta}\del_{\alpha}\del_\beta w.
\endaligned
\right.
\end{equation}
\subsection{Statement of the main result on the auxiliary system}\label{subsec-auxi-structure}
As explained in the Introduction, we will first establish a global stability result for \eqref{eq-main}.
\begin{theorem}\label{thm-auxiliary}
Consider the Cauchy problem associated with \eqref{eq-main} with initial data posed on $\{t=2\}$ and compactly supported in $\{|x|<1\}$:
\begin{align*}
&w(2,x) = \del_tw(2,x) = 0,\quad v(2,x) = v_0(x),\quad \del_tv(2,x) = v_1(x),
\\
&w_0(2,x) = u_0(x),\quad \del_tw_0(2,x) = u_1(x).
\end{align*}
Then there exists an integer $N\geq 9$ and a positive constant $\vep_0>0$ determined by the system, such that for all $0\leq \vep\leq \vep_0$, if
\begin{equation}
\|u_0\|_{H^{N+1}} + \|v_0\|_{H^{N+1}} + \|u_1\|_{H^N} + \|v_1\|_{H^N}\leq \vep,
\end{equation}
then the local-in-time solution of \eqref{eq-main} associated with such initial data extends to time infinity.
\end{theorem}
Based on the above result together with Lemma \ref{lem1-05-10-2020} and Lemma \ref{lem1-06-10-2020}, we conclude Theorem \ref{thm-main}.
\begin{remark}
For the Cauchy problem associated with \eqref{eq-main}, one can also consider initial data with non-zero $w(2,x)$, $\del_tw(2,x)$, and the global stability result still holds.
\end{remark}
\subsection{Structure of the auxiliary system}
\eqref{eq-main} is still a strongly coupled wave--Klein-Gordon system. However, it enjoys a special structure which we call the Hessian structure. That is, omitting for a moment the linear component $w_0$, the wave component $w$ is coupled only through its Hessian on the right-hand side of the system; in particular, the gradient $\del w$ does not appear. As explained in the Introduction, the better decay and energy bounds for the Hessian permit us to establish an integrable $L^2$ bound on $v\del\del w$. If we omit $w_0$, then this kind of system has already been handled in \cite{Stingo-2018}, \cite{Dong-2020-2} and \cite{M-2020-strong}. Following the perspective of \cite{M-2020-strong}, we find that this system is {\sl subcritical} in the sense of principal decay.

However, the presence of $w_0$ brings the supplementary terms $vw_0,v\del w_0$, which are not completely trivial. Given that $|w_0|$ and $|\del w_0|$ decay like $s^{-1}$, which seems impossible to improve, $vw_0$ and $v\del w_0$ will lead to at least a logarithmic loss in the energy bound of the Klein-Gordon component. This prevents one from expecting uniform energy bounds on the Klein-Gordon component at lower (even at zero) order. Without this important uniform bound, one can no longer obtain the sharp decay $v\sim t^{-1}$ via the Klainerman--Sobolev inequality, which is crucial in the bootstrap argument.

To overcome this difficulty, we rely on Proposition \ref{lem1'-01-09-2020}. This $L^{\infty}-L^{\infty}$ estimate was originally introduced in \cite{Kl2} and has been applied in many other contexts; see for example \cite{Dfx}, \cite{KS-2011}, \cite{M1} etc. Here we present a version with a non-constant-coefficient Klein-Gordon potential. This permits us to establish the decay
$$
|\del v|\simeq (s/t)^2s^{-1}
$$
without uniform energy bounds.

Similarly, this lack of a uniform energy bound on the Klein-Gordon component also brings inconvenience when we try to obtain sharp decay on $\del\del w$, because of the term $v^2$ coupled in the equation of $w$. Here we rely on an $L^{\infty}-L^{\infty}$ estimate on the wave equation based on integration along hyperbolas, namely Proposition \ref{prpo2 wave-sharp}, established in \cite{M-2020-strong}.

\section{Proof of Theorem \ref{thm-auxiliary}}\label{sec-bootstrap}
\subsection{Energy and decay bounds on $w_0$}
Remark that $w_0$ is a solution to a free linear wave equation with sufficiently regular and compactly supported initial data. Then its standard and conformal energies are uniformly bounded:
\begin{equation}
\Ecal_0^N(s,w_0)^{1/2} + \Ecal_2^N(s,w_0)^{1/2}\leq C_0\vep
\end{equation}
where $C_0$ is a constant determined by $N$.
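Let us indicate how this uniform bound is converted into decay (an elementary application of the tools of Section \ref{sec-tech}, recorded for the reader's convenience): the bounds \eqref{eq6-08-10-2020} below follow by applying the $L^\infty$ estimate \eqref{eq3-10-06-2020} with $c=0$ and $(p,k)=(N-2,N-2)$ to the above energy bound:
$$
s|\del w_0|_{N-2} + t|\dels w_0|_{N-2}\leq C\Ecal_0^{N,N}(s,w_0)^{1/2} = C\Ecal_0^{N}(s,w_0)^{1/2}\leq CC_0\vep.
$$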
Standard energy bounds lead to the following decay:
\begin{equation}\label{eq6-08-10-2020}
|\del w_0|_{N-2}\leq CC_0\vep s^{-1},\quad |\dels w_0|_{N-2}\leq CC_0\vep t^{-1}.
\end{equation}
Remark that in this case, recalling \eqref{eq1-09-10-2020},
\begin{equation}\label{eq8-08-10-2020}
\Fcal_2^N(2;s,w_0)\leq CC_1\vep\ln(s)\leq CC_1\vep s^{\delta}.
\end{equation}
Then by \eqref{eq4-10-06-2020},
\begin{equation}\label{eq8-08-09-2020}
|\del w_0|_{N-2}\leq CC_1\vep (s/t)^{-1}s^{-2}\ln(s)\leq CC_1\vep (s/t)^{-1}s^{-2+\delta}.
\end{equation}
Furthermore, by Lemma \ref{lem1-13-06-2020},
\begin{equation}\label{eq3-01-09-2020}
|w_0|_{N-2}\leq CC_1\vep s^{-1},
\end{equation}
which leads to
\begin{equation}\label{eq9-08-09-2020}
|\dels w_0|_{N-3}\leq CC_1\vep (s/t)s^{-2}.
\end{equation}
\subsection{Bootstrap assumption and direct bounds}
Remark that the initial data are posed on $\{t=2\}$ and supported in $\{|x|<1\}$. Finite speed of propagation guarantees that the local solution is supported in $\Kcal = \{r<t-1\}$. Furthermore, taking $\vep$ sufficiently small such that (thanks to the local theory for wave systems) the local solution extends beyond $t=5/2$, and remarking that $\Hcal_2\cap\Kcal\subset \{2\leq t\leq 5/2\}$, one can take the restriction of the local solution on $\Hcal_2$ as the initial data on $\Hcal_2$. Again, due to the local theory, the energy on $\Hcal_2$ is bounded by the initial energy on $\{t=2\}$. So for sufficiently small $\vep$ (determined by the system and $N$), there is a constant $C_0$ (also determined by the system and $N$) such that
$$
\max\Big\{\sum_{\alpha}\Ecal_0^N(2,\del_{\alpha} w)^{1/2}, \Ecal_0^N(2,w)^{1/2}, \Ecal_{0,c}^N(2,v)^{1/2}\Big\} = C_0\vep .
$$
Then we make the following bootstrap assumption on a hyperbolic time interval $[2,s_1]$:
\begin{equation}\label{eq8-15-08-2020}
\max\Big\{\sum_{\alpha}\Ecal_0^N(s,\del_{\alpha} w)^{1/2}, \Ecal_0^N(s,w)^{1/2}, \Ecal_{0,c}^N(s,v)^{1/2}\Big\}\leq C_1\vep s^{\delta}
\end{equation}
with $C_1> C_0$ and $\delta\leq 1/20$.
\begin{remark}
The restriction on $N$ can be improved. However, here we simply take $N\geq 9$ because, when considering $|AB|_p$, $p\leq N$, we want
$$
|AB|_p\leq C|A|_p|B|_{N-5} + C|A|_{N-5}|B|_p.
$$
\end{remark}
By the Klainerman--Sobolev type inequality,
\begin{equation}\label{eq1-08-10-2020}
s|\del\del w|_{N-2} + t|\del\dels w|_{N-2}\leq CC_1\vep s^{\delta},
\end{equation}
\begin{equation}\label{eq10-15-08-2020}
s|\del w|_{N-2} + t|\dels w|_{N-2} \leq CC_1\vep s^{\delta},
\end{equation}
\begin{equation}\label{eq11-15-08-2020}
s|\del v|_{N-2} + t|\dels v|_{N-2} + t|v|_{N-2}\leq CC_1\vep s^{\delta}.
\end{equation}
\begin{remark}
Throughout the analysis, $C$ denotes a constant determined by $N$, $\delta$ and the system.
\end{remark}
\subsection{Bounds on the Hessian form of $w$.}
By Proposition \ref{prop1-14-08-2020}, one can establish the following bounds on the Hessian form:
\begin{equation}\label{eq12-15-08-2020}
\|(s/t)^2s|\del\del w|_{N-1}\|_{L^2(\Hcal_s)}\leq CC_1\vep s^{2\delta},
\end{equation}
\begin{equation}\label{eq13-15-08-2020}
(s/t)^2|\del\del w|_{N-3}\leq CC_1\vep (s/t)s^{-2+2\delta}.
\end{equation}
Here we remark that the Hessian form enjoys better principal decay (of order $-2+2\delta$) than the gradient (of order $-1+\delta$). These bounds rely on the following estimates for $|\Box w| = |v^2|$:
\begin{equation}\label{eq2-08-10-2020}
|\Box w|_{N-2}\leq C(C_1\vep)^2(s/t)^2s^{-2+2\delta},
\end{equation}
\begin{equation}\label{eq3-08-10-2020}
\|(s/t)^{-1}|\Box w|_N\|_{L^2(\Hcal_s)}\leq C(C_1\vep)^2 s^{-1+2\delta}.
\end{equation}
The first is direct via \eqref{eq11-15-08-2020}. For the second, remark that
$$
\aligned
\|(s/t)^{-1}|v^2|_N\|_{L^2(\Hcal_s)}\leq& C\|(s/t)^{-1}|v|_{N-2}|v|_N\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep s^{-1+\delta} \||v|_N\|_{L^2(\Hcal_s)}\leq C(C_1\vep)^2s^{-1+2\delta}.
\endaligned
$$
Then, applying Proposition \ref{prop1-14-08-2020} together with \eqref{eq2-08-10-2020} and \eqref{eq3-08-10-2020}, the bounds \eqref{eq12-15-08-2020} and \eqref{eq13-15-08-2020} are proved.
\subsection{Conical decay of the Klein-Gordon component}\label{subsec-model-conical}
In this subsection we establish the following two bounds:
\begin{equation}\label{eq16-15-08-2020}
|v|_{N-3}\leq CC_1\vep(s/t)^2s^{-1+\delta},
\end{equation}
\begin{equation}\label{eq17-15-08-2020}
\|(s/t)^{-1}|v|_{N-1}\|_{L^2(\Hcal_s)}\leq CC_1\vep s^{\delta}.
\end{equation}
These are obtained by applying Proposition \ref{prop1-fast-kg}. We first prove that
\begin{equation}\label{eq4-08-10-2020}
|\Box v + c^2 v|_{N-3}\leq CC_1\vep |v|_{N-3}.
\end{equation}
This is done by checking each term on the right-hand side of the Klein-Gordon equation of \eqref{eq-main}. In fact, by \eqref{eq3-01-09-2020} and \eqref{eq6-08-10-2020},
$$
|vw_0|_{N-3} + |v\del w_0|_{N-3}\leq C\big(|w_0|_{N-3} + |\del w_0|_{N-3}\big)|v|_{N-3}\leq CC_1\vep|v|_{N-3}.
$$
Finally, by \eqref{eq1-08-10-2020},
$$
|v\del\del w|_{N-3}\leq C|\del\del w|_{N-3}|v|_{N-3}\leq CC_1\vep |v|_{N-3}.
$$
This establishes \eqref{eq4-08-10-2020}. Then, substituting \eqref{eq4-08-10-2020} into \eqref{eq2 prop fast-KG},
$$
c^2|v|_{N-3}\leq C(s/t)^2|\del v|_{N-2} + C|\Box v + c^2 v|_{N-3}\leq CC_1\vep(s/t)^2s^{-1+\delta} + CC_1\vep|v|_{N-3}.
$$
Taking $C_1\vep$ sufficiently small such that
\begin{equation}\label{eq7-08-10-2020}
|CC_1\vep|\leq \frac{c^2}{2},
\end{equation}
we obtain \eqref{eq16-15-08-2020}.

Then we turn to the $L^2$ bound. We establish the following bound on the source terms:
\begin{equation}\label{eq5-08-10-2020}
\|(s/t)^{-1}|\Box v + c^2 v|_{N-1}\|_{L^2(\Hcal_s)}\leq C(C_1\vep)^2s^{\delta} + CC_1\vep\|(s/t)^{-1}|v|_{N-1}\|_{L^2(\Hcal_s)} .
\end{equation}
This is also done by checking each term.
$$
\begin{aligned}
&\|(s/t)^{-1}|vw_0|_{N-1}\|_{L^2(\Hcal_s)}
\\
\leq& C\|(s/t)^{-1}|w_0|_{N-2}|v|_{N-1}\|_{L^2(\Hcal_s)}
+ C\|(s/t)^{-1}|v|_{N-3}|w_0|_{N-1}\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep\|(s/t)^{-1}s^{-1}|v|_{N-1}\|_{L^2(\Hcal_s)} + CC_1\vep s^{-1+\delta}\|(s/t)|w_0|_{N-1}\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep\|(s/t)^{-1}|v|_{N-1}\|_{L^2(\Hcal_s)} + C(C_1\vep)^2 s^{-1+\delta}\ln (s),
\end{aligned}
$$
where \eqref{eq3-01-09-2020}, \eqref{eq16-15-08-2020} and \eqref{eq2-10-06-2020} (combined with \eqref{eq8-08-10-2020}) are applied. Similarly,
$$
\aligned
&\|(s/t)^{-1}|v\del w_0|_{N-1}\|_{L^2(\Hcal_s)}
\\
\leq& C\|(s/t)^{-1}|v|_{N-1}|\del w_0|_{N-2}\|_{L^2(\Hcal_s)} + \|(s/t)^{-1}|v|_{N-3}|\del w_0|_{N-1}\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep\|(s/t)^{-1}|v|_{N-1}\|_{L^2(\Hcal_s)} + CC_1\vep s^{-1+\delta}\|(s/t)|\del w_0|_{N-1}\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep\|(s/t)^{-1}|v|_{N-1}\|_{L^2(\Hcal_s)} + C(C_1\vep)^2s^{-1+\delta},
\endaligned
$$
$$
\aligned
&\|(s/t)^{-1}|v\del\del w|_{N-1}\|_{L^2(\Hcal_s)}
\\
\leq& C\|(s/t)^{-1}|v|_{N-1}|\del\del w|_{N-2}\|_{L^2(\Hcal_s)}
+ C\|(s/t)^{-1}|v|_{N-3}|\del\del w|_{N-1}\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep \|(s/t)^{-1}|v|_{N-1}\|_{L^2(\Hcal_s)} + CC_1\vep s^{-1+\delta}\|(s/t)|\del\del w|_{N-1}\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep \|(s/t)^{-1}|v|_{N-1}\|_{L^2(\Hcal_s)} + C(C_1\vep)^2 s^{-1+2\delta}.
\endaligned
$$
Collecting these bounds yields \eqref{eq5-08-10-2020}.
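To pass from \eqref{eq5-08-10-2020} to \eqref{eq17-15-08-2020}, we record the absorption step (the $L^2$ counterpart of the argument used for \eqref{eq16-15-08-2020}): multiplying \eqref{eq2 prop fast-KG} by $(s/t)^{-1}$ and taking the $L^2(\Hcal_s)$ norm gives
$$
c^2\|(s/t)^{-1}|v|_{N-1}\|_{L^2(\Hcal_s)}\leq C\|(s/t)|\del v|_{N}\|_{L^2(\Hcal_s)} + C\|(s/t)^{-1}|\Box v + c^2v|_{N-1}\|_{L^2(\Hcal_s)}.
$$
The first term is bounded by $C\Ecal_{0,c}^N(s,v)^{1/2}\leq CC_1\vep s^{\delta}$ thanks to \eqref{eq1-10-06-2020} and \eqref{eq8-15-08-2020}; the second is bounded via \eqref{eq5-08-10-2020}, and the term $CC_1\vep\|(s/t)^{-1}|v|_{N-1}\|_{L^2(\Hcal_s)}$ is absorbed into the left-hand side under \eqref{eq7-08-10-2020}.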
In this way, taking $C_1\vep$ sufficiently small, \eqref{eq17-15-08-2020} is established.
\subsection{Improved energy bounds at lower order: Klein-Gordon component}
\label{subsec-model-KG-lower}
This subsection is dedicated to
\begin{equation}\label{eq6-01-09-2020}
\Ecal_{0,c}^{N-2}(s,v)^{1/2}\leq \big(C_0\vep + C(C_1\vep)^2\big)(s/2)^{CC_1\vep}.
\end{equation}
Let us first establish the following bounds on the source terms:
\begin{equation}\label{eq4-01-09-2020}
\||v\del\del w|_{N-1}\|_{L^2(\Hcal_s)}\leq C(C_1\vep)^2s^{-2+3\delta},
\end{equation}
\begin{equation}\label{eq5-01-09-2020}
\||vw_0|_{N-2}\|_{L^2(\Hcal_s)} + \||v\del w_0|_{N-2}\|_{L^2(\Hcal_s)}\leq CC_1\vep s^{-1}\Ecal_{0,c}^{N-2}(s,v)^{1/2}.
\end{equation}
The first is due to \eqref{eq12-15-08-2020}, \eqref{eq13-15-08-2020}, \eqref{eq16-15-08-2020} and \eqref{eq17-15-08-2020}:
$$
\aligned
\||v\del\del w|_{N-1}\|_{L^2(\Hcal_s)}\leq& C\||v|_{N-3}|\del\del w|_{N-1}\|_{L^2(\Hcal_s)} + C\||v|_{N-1}|\del\del w|_{N-3}\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep s^{-2+\delta}\|(s/t)^2s|\del\del w|_{N-1}\|_{L^2(\Hcal_s)}
\\
&+ CC_1\vep s^{-2+2\delta}\|(s/t)^{-1}|v|_{N-1}\|_{L^2(\Hcal_s)}
\\
\leq& C(C_1\vep)^2s^{-2+3\delta}.
\endaligned
$$
\eqref{eq5-01-09-2020} follows directly from \eqref{eq3-01-09-2020} and \eqref{eq6-08-10-2020}. Then, by the energy estimate of Proposition \ref{prop 1 energy},
$$
\Ecal_{0,c}^{N-2}(s,v)^{1/2}\leq \Ecal_{0,c}^{N-2}(2,v)^{1/2} + C(C_1\vep)^2 + CC_1\vep\int_2^s\tau^{-1}\Ecal_{0,c}^{N-2}(\tau,v)^{1/2}d\tau.
$$
Then by Gronwall's inequality, \eqref{eq6-01-09-2020} is concluded.

A direct consequence of \eqref{eq6-01-09-2020} are the following bounds (thanks to the Klainerman--Sobolev inequality and the facts that $C_0\leq C_1$ and $C_1\vep \leq 1$):
\begin{equation}\label{eq10-01-09-2020}
s|\del v|_{N-4} + t|v|_{N-4}\leq CC_1\vep s^{CC_1\vep}.
\end{equation}
\subsection{Sharp decay bounds.}\label{section sharp decay bounds}
This subsection is dedicated to the following sharp decay bounds:
\begin{equation}\label{eq12-01-09-2020}
|v|_{N-5}\leq CC_1\vep (s/t)^2s^{-1+CC_1\vep},
\end{equation}
\begin{equation}\label{eq18-01-09-2020}
|\del \del w|_{N-5}\leq CC_1\vep s^{-1 + CC_1\vep},
\end{equation}
\begin{equation}\label{eq11-01-09-2020}
(s/t)|\del v| + |v| \leq CC_1\vep (s/t)^2s^{-1},
\end{equation}
\begin{equation}\label{eq13-01-09-2020}
|\del\del w|\leq CC_1\vep s^{-1}.
\end{equation}
\paragraph*{Proof of \eqref{eq12-01-09-2020}.}
This is the easiest one. It is based on \eqref{eq10-01-09-2020} and parallel to \eqref{eq16-15-08-2020}. In fact, recalling \eqref{eq2 prop fast-KG} and \eqref{eq4-08-10-2020},
$$
\aligned
c^2|v|_{N-5}\leq& C(s/t)^2|\del v|_{N-4} + C|\Box v + c^2v|_{N-5}
\\
\leq& C(s/t)^2|\del v|_{N-4} + CC_1\vep |v|_{N-5}.
\endaligned
$$
Then by \eqref{eq10-01-09-2020} and \eqref{eq7-08-10-2020}, i.e., with $C_1\vep$ sufficiently small, \eqref{eq12-01-09-2020} is established.
\paragraph*{Proof of \eqref{eq18-01-09-2020}.}
This is a direct consequence of \eqref{eq10-01-09-2020}. We apply Proposition \ref{prpo2 wave-sharp} to the equations satisfied by the derivatives of $w$:
\begin{equation}\label{eq6-08-09-2020}
\Box \del_{\alpha}\del^IL^J w = \del_{\alpha}\del^IL^J(v^2).
\end{equation}
Following the notation of Proposition \ref{prpo2 wave-sharp}, one has
$$
\Delta^w[\del_{\alpha}\del^IL^J w] = t^{1/2}(t-r)^{1/2}\frac{t^2}{t^2+r^2}\sum_a\delu_a\delu_a\del_{\alpha}\del^IL^J w
$$
and thus for $|I|+|J|\leq N-4$ \footnote{Here we have applied $|\dels\dels u|_{p,k}\leq Ct^{-2}|u|_{p+2,k+2}$. This can be observed by homogeneity. A proof can be found in \cite{M-2020-strong}.},
$$
|\Delta^w[\del_{\alpha}\del^IL^J w]|\leq Cs|\dels\dels\del_{\alpha} w|_{N-4}\leq C(s/t)^2s^{-1}|\del w|_{N-2},
$$
which leads to, thanks to \eqref{eq10-15-08-2020},
\begin{equation}\label{eq4-08-09-2020}
|\Delta^w[\del_{\alpha}\del^IL^J w]|\leq CC_1\vep (s/t)^2s^{-2+\delta} \leq CC_1\vep t^{-2+\delta}.
\end{equation}
Remark that this is integrable with respect to $t$. On the other hand, recalling the definition, for $|I|+|J|\leq N-4$,
$$
|S^w[\del_{\alpha} \del^IL^J w]| \leq Cs|v\del v|_{N-4}.
$$
By \eqref{eq10-01-09-2020},
\begin{equation}\label{eq5-08-09-2020}
|S^w[\del_{\alpha} \del^IL^J w]|\leq C(C_1\vep)^2t^{-1}s^{CC_1\vep}\leq C(C_1\vep)^2t^{-1+CC_1\vep}.
\end{equation}
Now recall \eqref{eq1-29-05-2020}: for $|I|+|J|\le N-4$,
$$
\aligned
|s\del_t\del_{\alpha}\del^IL^J w(t,x)|\leq& CC_0\vep + CC_1\vep\int_2^t\tau^{-2+\delta}d\tau + C(C_1\vep)^2\int_2^t\tau^{-1+CC_1\vep}d\tau
\\
\leq& CC_1\vep t^{CC_1\vep}\leq CC_1\vep s^{CC_1\vep} .
\endaligned
$$
This concludes \eqref{eq18-01-09-2020}.
\paragraph*{Proof of \eqref{eq11-01-09-2020}.}
This is based on Proposition \ref{lem1'-01-09-2020}. We write the Klein-Gordon equation of \eqref{eq-main} in the form of \eqref{eq'14-01-09-2020}:
\begin{equation}\label{eq7-08-09-2020}
\Box v + c^2\Big(1 - \underbrace{c^{-2}\big(B^{\alpha}\del_{\alpha}w_0 + Kw_0 + A^{\alpha\beta}\del_{\alpha}\del_\beta w\big)}_{\omega}\Big) v = 0.
\end{equation}
Following the notation of Proposition \ref{lem1'-01-09-2020}, $f = 0$ and $\omega$ is defined by the above expression. We remark that
$$
\aligned
&|\del \omega|\leq C\big(|\del w_0| + |\del\del w_0| + |\del^3 w|\big),
\\
&|\dels \omega|\leq C\big(|\dels w_0| + |\del\dels w_0| + |\del\del\dels w|\big).
\endaligned
$$
By \eqref{eq8-08-09-2020}, \eqref{eq9-08-09-2020}, \eqref{eq13-15-08-2020} and the following observation:
\begin{equation}\label{eq1-09-09-2020}
|\del\dels w|_{N-3}\leq Ct^{-1}|\del w|_{N-2}\leq CC_1\vep (s/t)s^{-2+\delta},
\end{equation}
one has
\begin{equation}\label{eq2-09-09-2020}
|\del \omega|\leq CC_1\vep (s/t)^{-1}s^{-2 + 2\delta},\quad |\dels \omega| \leq CC_1\vep (s/t)s^{-2+\delta}.
\end{equation}
The key point is that both are integrable with respect to $s$ if we omit the conical decay. Then, following the notation of Proposition \ref{lem1'-01-09-2020}, when $0\leq r/t\leq 3/5$,
$$
s|v|(t,x) + s|((s/t)\del_t + (x^a/s)\delu_a) v|(t,x)\leq CC_0\vep + CC_1\vep\int_2^s \lambda^{-2+3\delta}d\lambda \leq CC_1\vep.
$$
When $3/5\leq r/t<1$, we need to apply \eqref{eq16-15-08-2020}:
$$
\aligned
&s|v|(t,x) + s|((s/t)\del_t + (x^a/s)\delu_a) v|(t,x)
\\
\leq& CC_1\vep (s/t)^2\int_{\lambda_0}^s\lambda^{-2+\delta}d\lambda + C(C_1\vep)^2(s/t)^{-1}\int_{\lambda_0}^s(s/t)^2\lambda^{\delta}\ (s/t)\lambda^{-2+2\delta}d\lambda
\\
\leq& CC_1\vep (s/t)^2\lambda_0^{-1+2\delta}.
\endaligned
$$
Remarking that $\lambda_0\simeq (s/t)^{-1}$, we obtain:
$$
|v|(t,x) + |((s/t)\del_t + (x^a/s)\delu_a) v|(t,x) \leq CC_1\vep (s/t)^{3-2\delta}s^{-1},
$$
which gives the bound on $|v|$.
Furthermore,
$$
(s/t)|\del_t v|(t,x)\leq CC_1\vep(s/t)^{3-2\delta}s^{-1} + C(s/t)^{-1}t^{-1}|v|_{1,1}\leq CC_1\vep (s/t)^2s^{-1},
$$
which shows the bound on $|\del_t v|$. Recall that
$$
|\del_a v| = |t^{-1}L_av - (x^a/t)\del_t v|\leq Ct^{-1}|v|_{1,1} + |\del_tv|\leq CC_1\vep (s/t)s^{-1}.
$$
This leads to \eqref{eq11-01-09-2020}.
\paragraph*{Proof of \eqref{eq13-01-09-2020}.}
This is the most critical one. We rely on Proposition \ref{prpo2 wave-sharp}. Remark that
\begin{equation}\label{eq3-09-09-2020}
\Box \del_{\alpha} w = 2v\del_{\alpha}v.
\end{equation}
Then \eqref{eq4-08-09-2020} is still applicable. Furthermore, based on \eqref{eq11-01-09-2020},
\begin{equation}\label{eq4-09-09-2020}
|S^w[\del_{\alpha} w]|\leq C(C_1\vep)^2 (s/t)^3s^{-1} = C (C_1\vep)^2 (s/t)^2 t^{-1}.
\end{equation}
Here the conical factor $(s/t)^2$ in the above bound is crucial. This bound permits us to compare $S^w[\del_{\alpha}w]$ with $P(t,r)$ in Proposition \ref{prpo2 wave-sharp} and prevents a logarithmic loss. Substituting \eqref{eq4-08-09-2020} and \eqref{eq4-09-09-2020} into \eqref{eq1-29-05-2020} and considering a point $(\bar{t},\bar{x})\in \Hcal_{\bar{s}}$,
$$
\aligned
&\bar{s}|\del_t\del_{\alpha} w|(\bar{t},\bar{x})
\\
\leq& CC_0\vep + C(C_1\vep)^2\int_2^{\bar{t}} (s/t)^2 t^{-1}\big|_{\gamma(t;\bar{t},\bar{x})} e^{-\int_{t}^{\bar{t}}P_{\bar{t},\bar{x}}(\eta)d\eta}dt + CC_1\vep \int_2^{\bar{t}} t^{-2+\delta}dt
\\
\leq& CC_1\vep + C(C_1\vep)^2\int_2^{\bar{t}}(s/t)^2t^{-1}\big|_{\gamma(t;\bar{t},\bar{x})}e^{-\int_{t}^{\bar{t}}(s/t)^2t^{-1}\big|_{\gamma(\eta;\bar{t},\bar{x})}d\eta}dt
\\
\leq& CC_1\vep.
\endaligned
$$
For the remaining components of the Hessian, say $|\del_a\del_\alpha w|$, we have
$$
\aligned
s|\del_a\del_{\alpha} w|(t,x)&=s |(\delu_a-(x^a/t)\del_t)\del_\alpha w| \leq s|\delu_a\del_\alpha w|+s|\del_t\del_\alpha w|\\
&\leq s|\delu_a\del_\alpha w|+CC_1\vep \leq (s/t)|L_a\del_\alpha w|+CC_1\vep\\
&\leq CC_1\vep.
\endaligned
$$
This concludes \eqref{eq13-01-09-2020}.
\subsection{Improving the energy bounds}
\label{subsec-model-improved}
Equipped with the sharp bounds \eqref{eq12-01-09-2020} -- \eqref{eq13-01-09-2020}, we are ready to improve \eqref{eq8-15-08-2020}.
\paragraph*{Energy estimate on the wave component.}
We remark that
\begin{equation}\label{eq1-11-09-2020}
\aligned
\||v^2|_p\|_{L^2(\Hcal_s)} + \||v\del v|_p\|_{L^2(\Hcal_s)}\leq& CC_1\vep s^{-1}\Ecal_{0,c}^p(s,v)^{1/2}
\\
&+ CC_1\vep s^{-1+CC_1\vep}\Ecal_{0,c}^{p-1}(s,v)^{1/2}.
\endaligned
\end{equation}
This follows directly from \eqref{eq10-01-09-2020} and \eqref{eq11-01-09-2020}:
$$
\aligned
&\||v\del v|_p\|_{L^2(\Hcal_s)}
\\
\leq& C\||v||\del v|_p\|_{L^2(\Hcal_s)} + C\||\del v||v|_p\|_{L^2(\Hcal_s)}
\\
&+ C\||v|_{N-4}|\del v|_{p-1}\|_{L^2(\Hcal_s)} + C\||v|_{p-1}|\del v|_{N-4}\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep s^{-1}\|(s/t)|\del v|_p\|_{L^2(\Hcal_s)} + CC_1\vep s^{-1}\||v|_p\|_{L^2(\Hcal_s)}
\\
& + CC_1\vep s^{-1+CC_1\vep}\|(s/t)|\del v|_{p-1}\|_{L^2(\Hcal_s)} + CC_1\vep s^{-1+CC_1\vep}\||v|_{p-1}\|_{L^2(\Hcal_s)},
\endaligned
$$
which leads to the bound on $v\del v$. The bound on $v^2$ is similar; we omit the details. Then we recall Proposition \ref{prop 1 energy} and apply it to
$$
\Box \del^IL^J w = \del^IL^J (v^2)
$$
and
$$
\Box \del^IL^J\del_{\alpha}w = \del^IL^J\del_{\alpha}(v^2)
$$
with $|I|+|J|\leq p$.
We obtain:
$$
\aligned
&E_0(s,\del^IL^J w)^{1/2} + \sum_{\alpha} E_0(s,\del^IL^J\del_{\alpha}w)^{1/2}
\\
\leq& E_0(2,\del^IL^J w)^{1/2} + \sum_{\alpha} E_0(2,\del^IL^J\del_{\alpha}w)^{1/2}
\\
&+ C\int_2^s\|\del^IL^J(v^2)\|_{L^2(\Hcal_\tau)}d\tau + C\sum_{\alpha}\int_2^s\|\del^IL^J\del_{\alpha}(v^2)\|_{L^2(\Hcal_\tau)}d\tau,
\endaligned
$$
where, by \eqref{eq1-11-09-2020}, the integrands are bounded by
$$
CC_1\vep \tau^{-1}\Ecal_{0,c}^p(\tau,v)^{1/2} + CC_1\vep \tau^{-1+CC_1\vep}\Ecal_{0,c}^{p-1}(\tau,v)^{1/2}.
$$
So we conclude that
\begin{equation}\label{eq2-11-09-2020}
\aligned
&\Ecal_0^p(s,w)^{1/2} + \sum_{\alpha}\Ecal_0^p(s,\del_{\alpha} w)^{1/2}
\\
\leq& \Ecal_0^p(2,w)^{1/2} + \sum_{\alpha}\Ecal_0^p(2,\del_{\alpha} w)^{1/2} + CC_1\vep \int_2^s\tau^{-1}\Ecal_{0,c}^p(\tau, v)^{1/2}d\tau
\\
&+ CC_1\vep\int_2^s\tau^{-1+CC_1\vep}\Ecal_{0,c}^{p-1}(\tau,v)^{1/2}d\tau.
\endaligned
\end{equation}
Then, writing \eqref{eq6-01-09-2020} in the following form (recall that $C_0\leq C_1$ and $C_1\vep\leq1$):
\begin{equation}\label{eq2-09-10-2020}
\Ecal_{0,c}^{N-2}(s,v)^{1/2}\leq CC_1\vep s^{CC_1\vep}\leq CC_1\vep s^{C(C_1\vep)^{1/2}}
\end{equation}
and substituting it into the above expression, we obtain:
\begin{equation}\label{eq3-09-10-2020}
\Ecal_0^{N-2}(s,w)^{1/2} + \sum_{\alpha}\Ecal_0^{N-2}(s,\del_{\alpha} w)^{1/2} \leq C_0\vep + C(C_1\vep)^{3/2}s^{C(C_1\vep)^{1/2}}.
\end{equation}
\paragraph*{Energy estimate on the Klein-Gordon component.}
This is also by Proposition \ref{prop 1 energy}. We will establish
\begin{equation}\label{eq3-11-09-2020}
\aligned
\||w_0 v|_p\|_{L^2(\Hcal_s)} +& \||v\del w_0|_p\|_{L^2(\Hcal_s)} + \||v\del\del w|_p\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep s^{-1}\Ecal_{0,c}^p(s,v)^{1/2} + CC_1\vep s^{-1}\sum_{\alpha}\Ecal_0^p(s,\del_{\alpha} w)^{1/2}
\\
&+ C(C_1\vep)^{5/3}s^{-1+C(C_1\vep)^{1/3}}.
\endaligned
\end{equation}
For the first term,
$$
\aligned
&\||w_0v|_p\|_{L^2(\Hcal_s)}
\\
\leq& C\||w_0|_{N-2}|v|_p\|_{L^2(\Hcal_s)} + C\||v|_{N-4}|w_0|_{p-1}\|_{L^2(\Hcal_s)} + C\||v||w_0|_p\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep s^{-1}\||v|_p\|_{L^2(\Hcal_s)} + CC_1\vep s^{-1+CC_1\vep}\|(s/t)|w_0|_{p-1}\|_{L^2(\Hcal_s)}
\\
&+ CC_1\vep s^{-1}\|(s/t)|w_0|_p\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep s^{-1}\Ecal_{0,c}^p(s,v)^{1/2} + CC_1\vep s^{-1} \Fcal_2^p(2;s,w_0)
\\
&+ CC_1\vep s^{-1+CC_1\vep}\Fcal_2^{p-1}(2;s,w_0).
\endaligned
$$
Here $\|(s/t)|w_0|_p\|_{L^2(\Hcal_s)}$ is bounded by $\Fcal_2^p(2;s, w_0)$. The latter is bounded as follows (recalling \eqref{eq1-09-10-2020} and the conformal energy bound on $w_0$):
$$
\aligned
\Fcal_2^p(2;s,w_0)\leq& \|(s/t)|w_0|_p\|_{L^2(\Hcal_s)} + \Ecal_2^p(s,w_0)^{1/2} + \int_2^s\tau^{-1}\Ecal_2^p(\tau,w_0)^{1/2}d\tau
\\
\leq& CC_0\vep + CC_0\vep \ln(s) \leq C(C_1\vep)^{2/3} s^{(C_1\vep)^{1/3}},
\endaligned
$$
where in the last step we used $\ln(s)\leq \big((C_1\vep)^{1/3}\big)^{-1}s^{(C_1\vep)^{1/3}}$ and $C_0\leq C_1$. Then
$$
\||w_0v|_p\|_{L^2(\Hcal_s)}\leq CC_1\vep s^{-1}\Ecal_{0,c}^p(s,v)^{1/2} + C(C_1\vep)^{5/3}s^{-1+C(C_1\vep)^{1/3}}.
$$
The second term is bounded as follows:
$$
\aligned
\||v\del w_0|_p\|_{L^2(\Hcal_s)}\leq& C\||\del w_0|_{N-3}|v|_p\|_{L^2(\Hcal_s)} + C\||v|_{N-4}|\del w_0|_p\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep s^{-1}\Ecal_{0,c}^p(s,v)^{1/2} + CC_1\vep s^{-1+CC_1\vep}\|(s/t)|\del w_0|_p\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep s^{-1}\Ecal_{0,c}^p(s,v)^{1/2} + C(C_1\vep)^{5/3}s^{-1+C(C_1\vep)^{1/3}}.
\endaligned
$$
For the last term,
$$
\aligned
&\||v\del\del w|_p\|_{L^2(\Hcal_s)}
\\
\leq& C\||v||\del\del w|_p\|_{L^2(\Hcal_s)} + C\||\del\del w||v|_p\|_{L^2(\Hcal_s)}
\\
& + C\||v|_{N-3}|\del\del w|_{N-1}\|_{L^2(\Hcal_s)} + C\||v|_{N-1}|\del\del w|_{N-3}\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep s^{-1} \|(s/t)|\del\del w|_p\|_{L^2(\Hcal_s)} + CC_1\vep s^{-1}\||v|_p\|_{L^2(\Hcal_s)}
\\
& + CC_1\vep s^{-2+\delta}\|(s/t)^2s|\del\del w|_{N-1}\|_{L^2(\Hcal_s)} + CC_1\vep s^{-2+\delta}\|(s/t)^{-1}|v|_{N-1}\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep s^{-1}\Big(\sum_{\alpha}\Ecal_0^p(s,\del_{\alpha}w)^{1/2} + \Ecal_{0,c}^p(s,v)^{1/2}\Big) + C(C_1\vep)^2s^{-2+3\delta}.
\endaligned
$$
Here, for the second inequality we have applied \eqref{eq11-01-09-2020}, \eqref{eq13-01-09-2020}, \eqref{eq16-15-08-2020} and \eqref{eq13-15-08-2020}; for the last inequality, \eqref{eq12-15-08-2020} and \eqref{eq17-15-08-2020} are applied. Substituting these bounds into \eqref{ineq 3 prop 1 energy},
$$
\aligned
\Ecal_{0,c}^p(s,v)^{1/2}\leq&\Ecal_{0,c}^p(2,v)^{1/2}
\\
&+ C\int_2^s\big(\||v\del w_0|_p\|_{L^2(\Hcal_\tau)} + \||vw_0|_p\|_{L^2(\Hcal_\tau)} + \||v\del\del w|_p\|_{L^2(\Hcal_\tau)}\big)d\tau,
\endaligned
$$
which leads to
\begin{equation}\label{eq4-11-09-2020}
\aligned
\Ecal_{0,c}^p(s,v)^{1/2}\leq& \Ecal_{0,c}^p(2,v)^{1/2} + C(C_1\vep)^{4/3}s^{C(C_1\vep)^{1/3}}
\\
&+ CC_1\vep\int_2^s\tau^{-1}\Big(\sum_{\alpha}\Ecal_0^p(\tau,\del_{\alpha}w)^{1/2} + \Ecal_{0,c}^p(\tau,v)^{1/2}\Big)d\tau.
\endaligned
\end{equation}
\paragraph*{Inductive argument.}
For convenience, we denote
$$
A^p(s) := \max\Big\{\sum_{\alpha}\Ecal_0^p(s,\del_{\alpha} w)^{1/2}, \Ecal_0^p(s,w)^{1/2}, \Ecal_{0,c}^p(s,v)^{1/2} \Big\}.
$$
Then \eqref{eq2-11-09-2020} and \eqref{eq4-11-09-2020} lead to
\begin{equation}\label{eq5-11-09-2020}
\aligned
A^p(s) \leq& A^p(2) + C(C_1\vep)^{4/3} s^{C(C_1\vep)^{1/3}}
\\
&+ CC_1\vep \int_2^s\tau^{-1}A^p(\tau)d\tau + CC_1\vep\int_2^s\tau^{-1+CC_1\vep}A^{p-1}(\tau)d\tau
\endaligned
\end{equation}
for $0\leq p\leq N$. Recall that \eqref{eq2-09-10-2020} and \eqref{eq3-09-10-2020} show that
\begin{equation}
A^{N-2}(s)\leq CC_1\vep s^{C(C_1\vep)^{1/2}}.
\end{equation}
Now we concentrate on the case $p=N-1$:
$$
\aligned
A^{N-1}(s)\leq& C_0\vep + C(C_1\vep)^{4/3}s^{C(C_1\vep)^{1/3}} + CC_1\vep\int_2^s\tau^{-1}A^{N-1}(\tau)d\tau
\\
&+ CC_1\vep\int_2^s\tau^{-1+CC_1\vep}A^{N-2}(\tau)d\tau
\\
\leq& C_0\vep + C(C_1\vep)^{4/3}s^{C(C_1\vep)^{1/3}} + CC_1\vep\int_2^s\tau^{-1}A^{N-1}(\tau)d\tau.
\endaligned
$$
Gronwall's inequality leads to
\begin{equation}\label{eq4-09-10-2020}
A^{N-1}(s)\leq C_0\vep(s/2)^{CC_1\vep} + C(C_1\vep)^{4/3}s^{C(C_1\vep)^{1/3}}\leq CC_1\vep s^{C(C_1\vep)^{1/3}}.
\end{equation}
Then, taking $p=N$ and using the above bound on $A^{N-1}$, we repeat the argument and obtain:
\begin{equation}
A^N(s)\leq C_0\vep(s/2)^{CC_1\vep} + C(C_1\vep)^{4/3}s^{C(C_1\vep)^{1/3}}.
\end{equation}
\subsection{Conclusion of the bootstrap argument.}
Taking
\begin{equation}\label{eq12-09-10-2020}
C_1>2C_0,\quad \vep\leq \delta^3/(C^3C_1),\quad \vep \leq \frac{(C_1-2C_0)^3}{8C^3C_1^4},\quad \vep\leq \frac{c^2}{2CC_1},
\end{equation}
one guarantees the following bounds:
$$
CC_1\vep\leq \frac{c^2}{2}, \quad C(C_1\vep)^{1/3}\leq \delta, \quad C_0\vep + C(C_1\vep)^{4/3}\leq \frac{1}{2}C_1\vep.
$$
These guarantee \eqref{eq7-08-10-2020} and
$$
A^N(s) = \max\Big\{\Ecal_0^N(s,w)^{1/2}, \Ecal_{0,c}^N(s,v)^{1/2}, \sum_{\alpha}\Ecal_0^N(s,\del_{\alpha} w)^{1/2}\Big\}\leq \frac{1}{2}C_1\vep s^{\delta}.
$$
This closes the bootstrap argument.
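For completeness, we sketch the standard continuity argument behind this conclusion. Define
$$
s_* := \sup\big\{s_1\geq 2 \,:\, \text{\eqref{eq8-15-08-2020} holds on } [2,s_1]\big\}.
$$
If $s_*$ were finite, the improved bound $A^N(s)\leq \frac{1}{2}C_1\vep s^{\delta}$ on $[2,s_*]$, combined with the local existence theory, would allow \eqref{eq8-15-08-2020} to be extended beyond $s_*$, contradicting the definition of $s_*$. Hence $s_* = \infty$, the energies remain finite on every $\Hcal_s$, and the local solution extends to time infinity, which proves Theorem \ref{thm-auxiliary}.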
\subsection{Application to the Klein-Gordon-Zakharov system}
Clearly, \eqref{eq1-Zakharov} is of the form \eqref{eq1b-main}. So Theorem \ref{thm-main} applies directly, and we conclude the global existence result for \eqref{eq1-Zakharov} with small localized regular initial data.
\section{Return to the totally geodesic wave map system}\label{sec-conclusion-wave-map}
\subsection{The stability problem for a type of totally geodesic wave maps}\label{subsec-wave-map}
A detailed explanation and formulation can be found in \cite{Ab-2019}. Here we only give an outline. Let $\RR^{2+1}$ be the standard $2+1$ dimensional Minkowski space-time with signature $(-,+,+)$. Let $(M,g)$ be an $n$-dimensional space form. Consider a map $\RR^{2+1}\stackrel{\phi}{\longrightarrow}M$. This map is called a {\sl wave map} if it is a critical point of the following action:
\begin{equation}
S[\phi] = \int_{\RR^{2+1}}\langle d\phi, d\phi \rangle_{T^*\RR^{2+1}\otimes \phi^{-1}TM}\, d\text{vol}_{\RR^{2+1}}.
\end{equation}
Then $\phi$ satisfies the following Euler-Lagrange equation:
\begin{equation}\label{eq1-04-10-2020}
\Box_\m \phi^i + \Gamma_{jk}^i(\phi) \m^{\mu\nu}\del_{\mu}\phi^j\del_{\nu}\phi^k = 0
\end{equation}
where $\m$ is the Minkowski metric defined on $\RR^{2+1}$, $\Box_\m = \m^{\alpha\beta}\del_{\alpha}\del_{\beta} = -\del_t^2 + \Delta_{\RR^2}$, and the $\Gamma_{jk}^i(\phi)$ are the Christoffel symbols of $(M,g)$ evaluated along the image of $\phi$.

We consider a wave map from $\RR^{2+1}$ to $(M,g)$ with the following factorization:
$$
\varphi:\RR^{2+1}\stackrel{\varphi_S}{\longrightarrow}\RR\stackrel{\varphi_I}{\longrightarrow} M.
$$
Here $\varphi_S$ is a semi-Riemannian submersion to either $(\RR,e)$ or $(\RR,-e)$, and $\varphi_I$ is an immersion from $(\RR,e)$ to $(M,g)$. By \cite{ES64} and \cite{Vil70}, the above factorization implies that $\varphi = \varphi_I\circ\varphi_S$ is totally geodesic. One then considers the stability problem for $\varphi$. Furthermore, following \cite{Vil70}, $\varphi_S$ is prescribed to be a linear function $\RR^{2+1}\rightarrow \RR$ with $\m(d\varphi_S,d\varphi_S) = \pm 1$. Then $\varphi_I$ is an immersed geodesic in $(M,g)$.

The quantitative formulation and analysis of this problem are based on geodesic normal coordinates. These permit one to parameterize a tubular neighborhood of an arbitrary geodesic, in which the Christoffel symbols vanish along the geodesic. Let us give a brief description. Let $(M,g)$ be a complete Riemannian manifold and $\gamma:\RR\rightarrow M$ a fixed geodesic, parameterized by arc-length. At $\gamma(0)$, let $\vec{e}_1 = \dot\gamma(0)$ and
$$
e^{\perp} := (\vec{e}_2,\cdots, \vec{e}_n),\quad \vec{e}_i\perp \vec{e}_j \ (i\neq j),\quad |\vec{e}_i|=1.
$$
For $x^1\in \RR$, define $\vec{e}_i$ by parallel transport along $\gamma$. This forms an orthonormal frame along $\gamma$. Let $\exp_{\gamma(x^1)}(t\vec{v})$ be the geodesic satisfying
$$
\frac{d}{dt}\exp_{\gamma(x^1)}(t\vec{v})\big|_{t=0} = \vec{v},\quad \exp_{\gamma(x^1)}(t\vec{v})\big|_{t=0} = \gamma(x^1)
$$
with $\vec{v}\in \dot\gamma(x^1)^{\perp}$. For $(x^1,\bar{x}) = (x^1,x^2,\cdots x^n)$ with $|\bar{x}|$ sufficiently small,
$$
\sigma: (x^1,\bar{x})\rightarrow \exp_{\gamma(x^1)}\Big(\sum_{j=2}^nx^j\vec{e}_j\Big)
$$
gives a parameterization of a tubular neighborhood of $\gamma$. These are called geodesic normal coordinates.
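As a concrete illustration (a standard example, not used in the sequel), let $(M,g)$ be the hyperbolic plane, a space form of sectional curvature $-1$. In geodesic normal (Fermi) coordinates $(x^1,x^2)$ along any geodesic $\gamma$, the metric reads
$$
g = \cosh^2(x^2)\,(dx^1)^2 + (dx^2)^2,
$$
and the non-trivial Christoffel symbols $\Gamma_{12}^1 = \tanh(x^2)$ and $\Gamma_{11}^2 = -\sinh(x^2)\cosh(x^2)$ indeed vanish along $\gamma = \{x^2=0\}$. In this example the coordinates are global, i.e., the focal radius is infinite.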
Due to the assumption that $(M,g)$ is a space form, this coordinate system is well defined in $(-\infty,\infty)\times \{|\bar{x}|< \delta\}$ with a fixed $\delta>0$ (the {\sl focal radius}). With these geodesic normal coordinates, a perturbation of $\varphi$ is described as follows (see \cite{Ab-2019} for details). We construct the above geodesic normal coordinates in a tubular neighborhood of $\varphi_I(\RR)$. Then $\varphi$ is written as
$$
\RR^{2+1}\stackrel{\varphi_S}{\longrightarrow}\RR\stackrel{\varphi_I}{\longrightarrow} M,\quad (t,x)\rightarrow \sigma\big(\varphi_S(t,x),0\big).
$$
Then we perturb $\varphi$ as follows: consider
$$
\tilde{\varphi} : \RR^{2+1}\rightarrow M,\quad (t,x)\rightarrow \sigma\big(\varphi_S(t,x) + \phi^1(t,x),\phi^k(t,x)\big), \quad k=2,\cdots, n
$$
and demand that $\tilde{\varphi}$ is again a wave map. Thanks to \eqref{eq1-04-10-2020} and the fact that $\varphi_S$ is linear, this leads to\footnote{Here $\Box_\m := \m^{\alpha\beta}\nabla_{\alpha}\nabla_{\beta} = -\del_t^2+\sum_a\del_a^2$.}
\begin{equation}\label{eq15-07-10-2020}
\begin{aligned}
&\Box_\m \phi^1 + \Gamma_{jk}^1(\varphi_S+\phi^1,\phi^k)\cdot \m(d\bar{\phi}^j,d\bar{\phi}^k) = 0,
\\
&\Box_\m \phi^i + \Gamma_{jk}^i(\varphi_S+\phi^1,\phi^k)\cdot \m(d\bar{\phi}^j,d\bar{\phi}^k) = 0, \quad i = 2,\cdots, n
\end{aligned}
\end{equation}
with $\bar{\phi}^1 = \varphi_S(t,x) + \phi^1$ and $\bar{\phi}^k = \phi^k$, $k=2,3,\cdots n$. One then develops the nonlinear terms $\Gamma_{jk}^i(\varphi_S+\phi^1,\phi^k)\cdot \m(d\bar{\phi}^j,d\bar{\phi}^k)$ into Taylor series at each point of $\varphi_I(\RR)$, i.e., at $(\phi^1,\phi^k) = 0$. Since (due to the construction of the geodesic normal coordinates) $\Gamma_{jk}^i\equiv 0$ along $\varphi_I(\RR)$,
\begin{equation}\label{eq16-07-10-2020}
\del_1^q\Gamma_{jk}^i(\varphi_S,0) = 0,\quad q=0,1,2,\cdots
\end{equation}
This leads to the fact that in the Taylor development of $\Gamma$ there is no monomial consisting of powers of $\phi^1$ alone; every monomial contains at least one factor $\phi^k$ with $k\geq 2$. Furthermore,
$$
\aligned
\m(d\bar{\phi}^j,d\bar{\phi}^k) =&\m(d\varphi_S,d\varphi_S)\delta_1^j\delta_1^k + \m(d\varphi_S,d\phi^k)\delta_1^j + \m(d\phi^j,d\varphi_S)\delta_1^k + \m(d\phi^j,d\phi^k)
\\
=& \pm \delta_1^j\delta_1^k + \m(d\varphi_S,d\phi^k)\delta_1^j + \m(d\phi^j,d\varphi_S)\delta_1^k + \m(d\phi^j,d\phi^k).
\endaligned
$$
Then, following the procedure in Section 3 of \cite{Ab-2019}, $(\phi^1,\phi^k)$ satisfies the following system \footnote{Recall that we have taken $\Box = \del_t^2 - \sum_a\del_a^2$.} when $(M,g)$ is of sectional curvature $\equiv -1$:
\begin{subequations}\label{eq2-04-10-2020}
\begin{equation}\label{eq2a-04-10-2020}
\aligned
-\Box \phi^1 = & -2\sum_{k=2}^n\phi^k\del_t\phi^k - 2\sum_{k,l=2}^n\del_k\del_l\Gamma_{j1}^1(\varphi_S,0)\phi^k\phi^l\cdot \m(d \phi^j,d\varphi_S)
\\
& + \sum_{k=2}^n\del_k\Gamma_{jl}^1(\varphi_S,0)\phi^k\cdot \m(d\phi^j,d\phi^l) + \sum_{k,j,l=2}^n\del_k\del_j\del_l\Gamma_{11}^1(\varphi_S,0)\phi^k\phi^j\phi^l
\\
& +\text{ h.o.t.}
\\
-\Box \phi^i - \phi^i =& 2\phi^i\del_t \phi^1 - 2\sum_{k,l=2}^n\del_k\del_l\Gamma_{j1}^i(\varphi_S,0)\phi^k\phi^l\cdot \m(d\phi^j,d\varphi_S)
\\
& + \sum_{k=2}^n\del_k\Gamma_{jl}^i(\varphi_S,0)\phi^k\cdot \m(d\phi^j,d\phi^l) - \sum_{k,j,l=2}^n\del_k\del_j\del_l \Gamma_{11}^i(\varphi_S,0)\phi^k\phi^j\phi^l
\\
&+ \text{h.o.t.},\quad i=2,\cdots,n.
\endaligned
\end{equation}
And when $(M,g)$ is of sectional curvature $\equiv 1$,
\begin{equation}\label{eq2b-04-10-2020}
\aligned
-\Box \phi^1 = & 2\sum_{k=2}^n\phi^k\del_1\phi^k - 2\sum_{k,l=2}^n\del_k\del_l\Gamma_{j1}^1(\varphi_S,0)\phi^k\phi^l\cdot \m(d \phi^j,d\varphi_S)
\\
& + \sum_{k=2}^n\del_k\Gamma_{jl}^1(\varphi_S,0)\phi^k\cdot \m(d\phi^j,d\phi^l) - \sum_{k,j,l=2}^n\del_k\del_j\del_l\Gamma_{11}^1(\varphi_S,0)\phi^k\phi^j\phi^l
\\
&+ \text{h.o.t.}
\\
-\Box \phi^i - \phi^i =& -2\phi^i\del_1 \phi^1 - 2\sum_{k,l=2}^n\del_k\del_l\Gamma_{j1}^i(\varphi_S,0)\phi^k\phi^l\cdot \m(d\phi^j,d\varphi_S)
\\
& + \sum_{k=2}^n\del_k\Gamma_{jl}^i(\varphi_S,0)\phi^k\cdot \m(d\phi^j,d\phi^l) - \sum_{k,j,l=2}^n\del_k\del_j\del_l \Gamma_{11}^i(\varphi_S,0)\phi^k\phi^j\phi^l
\\
&+ \text{h.o.t.},\quad i=2,\cdots,n.
\endaligned
\end{equation}
\end{subequations}
We summarize the key structures of the above two systems. Firstly, in both cases the quadratic terms coupled in the wave equation are in divergence form. Secondly, as a consequence of Lemma 2.5 of \cite{Ab-2019}, the coefficients $\del^k\Gamma(\varphi_S,0)$ can be regarded as universal constants. Remark that in order to guarantee global existence in $\RR^{2+1}$, we must also analyze the cubic terms (this is explained in \cite{A2} in the pure wave case). In \eqref{eq2-04-10-2020} we are fortunate that in both cases, and in both the wave and Klein-Gordon equations, the cubic terms are either null cubic forms or contain at least two Klein-Gordon factors. Finally, the higher-order terms can be written as linear combinations of
\begin{equation}\label{eq18-07-10-2020}
\aligned
&\phi^j\phi^k \m(d\phi^a,d\phi^b)\cdot O(\phi),\quad \phi^j\phi^k\phi^l\del\phi^c\cdot O(\phi),
\\
&1\leq a,b,c \leq n,\quad 2\leq j,k,l \leq n
\endaligned
\end{equation}
with coefficients $\del^q\Gamma(\varphi_S,0)$, which can be regarded as universal constants due to Lemma 2.5 of \cite{Ab-2019} and the remark made below equation (5.2) therein. The important structure is the presence of two Klein-Gordon factors. This is due to \eqref{eq16-07-10-2020}.
\subsection{Formulation of the auxiliary system and statement of the main result}
This subsection is devoted to the construction of the auxiliary system for \eqref{eq2-04-10-2020}. We only treat the case of negative sectional curvature; the positive case is similar and we omit the details.
Firstly, we write \eqref{eq2a-04-10-2020} in the following form:
\begin{equation}\label{eq1-02-09-2020}
\aligned
&\Box \phi^1 = -2\sum_{k=2}^n\phi^k \del_t\phi^k + S_W[\phi],
\\
&\Box \phi^k + \phi^k = 2\phi^k \del_t\phi^1 + S_{KG}^{k}[\phi] , \quad 2\leq k\leq n,
\\
&\phi^1(2,x) = \phi^1_0(x),\quad \del_t \phi^1(2,x) = \phi^1_1(x),
\\
&\phi^k(2,x) = \phi^k_0(x),\quad \del_t \phi^k(2,x) = \phi^k_1(x),\quad 2\leq k\leq n.
\endaligned
\end{equation}
Here $S_W$ and $S^k_{KG}$ contain the third- and higher-order terms. By introducing the shifted primitive of $\phi^1$:
\begin{equation}\label{eq2-05-10-2020}
\phi^1 = \del_t w + w_0,
\end{equation}
we arrive at the following auxiliary system:
\begin{equation}\label{eq4-02-09-2020}
\left\{
\aligned
&\Box w = - \sum_{k=2}^n|\tilde{\phi}^k|^2,
\\
&\Box w_0 = S_W[(\del_t w + w_0),\tilde{\phi}^k],
\\
&\Box \tilde{\phi}^k + \tilde{\phi}^k = 2\tilde{\phi}^k\del_t\big(\del_t w + w_0\big) + S_{KG}^{k}[(\del_t w + w_0),\tilde{\phi}^k],\quad 2\leq k\leq n
\endaligned
\right.
\end{equation}
with initial data
\begin{equation}\label{eq5-02-09-2020}
\aligned
&w (2,x) = 0,\quad \del_t w(2,x) = 0,\quad \tilde{\phi}^k(2,x) = \phi^k_0(x),\quad \del_t\tilde{\phi}^k(2,x) = \phi^k_1(x),
\\
& w_0(2,x) = \phi^1_0(x), \quad \del_t w_0(2,x) = \phi^1_1(x) + \sum_{k=2}^n|\phi^k_0(x)|^2.
\endaligned
\end{equation}
Parallel to Lemma \ref{lem1-05-10-2020}, the following result holds:
\begin{lemma}\label{lem2-05-10-2020}
Let $(w,w_0,\tilde{\phi}^k)$ be a $C^3$ solution to \eqref{eq4-02-09-2020}. Then $(\phi^1,\phi^k)$, with $\phi^1$ defined by \eqref{eq2-05-10-2020} and $\phi^k = \phit^k$, is a $C^2$ solution to \eqref{eq1-02-09-2020}.
\end{lemma}
\begin{remark}\label{rk-1-12-10-2020}
Comparing \eqref{eq4-02-09-2020} with \eqref{eq-main}, the main difference is that $w_0$ is no longer a solution to the free linear wave equation. However, it is not far from one, because the right-hand side of the equation for $w_0$ in \eqref{eq4-02-09-2020} is {\sl cubic}. Another important difference is that in \eqref{eq4-02-09-2020}, $w_0$ is coupled only through its gradient. More precisely, the term $vw_0$ does not exist in the Klein-Gordon equations. Although it is not necessary, this structure simplifies our argument a lot. For example, we do not need to include the conformal energy of $w_0$ in the bootstrap assumption, which was necessary in Section \ref{sec-bootstrap} in order to establish the $L^2$ and pointwise bounds on $|w_0|_{p}$.
\end{remark}
Then we establish the following result:
\begin{theorem}\label{thm-wave-map}
Suppose that $\phi^j_i$, $i=0,1$ and $j=1,\cdots, n$, are compactly supported in $\{|x|<1\}$. Then there is an integer $N\geq 7$ and a positive constant $\vep_0>0$, determined by the system and $N$, such that for all $0\leq \vep\leq \vep_0$, if
\begin{equation}\label{eq3-05-10-2020}
\|\phi^j_0\|_{H^{N+1}}\leq \vep,\quad \|\phi_1^j\|_{H^N}\leq \vep,\quad j=1,2,\cdots,n,
\end{equation}
then the local solution to the Cauchy problem associated with \eqref{eq1-02-09-2020}, with initial data satisfying \eqref{eq3-05-10-2020}, extends to time infinity.
\end{theorem}
\begin{remark}
The regularity $N\geq 7$ guarantees that, for sufficiently regular functions $A,B$,
$$
|AB|_p\leq |A|_p|B|_{N-4} + |A|_{N-4}|B|_p.
$$
\end{remark}
\begin{remark}
The restriction $N\geq 7$ is not optimal. As we will see in the proof, because the auxiliary system is {\bf subcritical} in the sense of principal decay, this regularity can probably be improved. However, in view of Lemma \ref{lem2-05-10-2020} there is a limit.
Regarding Lemma \ref{lem2-05-10-2020} and the auxiliary system \eqref{eq4-02-09-2020}, we need to guarantee the $C^3$ regularity of $w$ and the $C^2$ regularity of $(w_0,\phit^k)$. So we need $H^4$ regularity on $\phi_1^j$ and $H^5$ regularity on $\phi^j_0$.
\end{remark}

\section{Proof of Theorem \ref{thm-wave-map}}
\label{sec-wave-maps-proof}

\subsection{Bootstrap assumption and direct bounds}

We establish this global stability result via \eqref{eq4-02-09-2020}. This is quite similar to the proof in Section \ref{sec-bootstrap}. In fact, there is a one-to-one correspondence between the subsections here and those in Section \ref{sec-bootstrap}, except for Subsection \ref{subsec-high-order}, in which we treat the higher-order terms. There are also other modifications, among which the most important concerns the bound on $w_0$: in this case one only demands a uniform bound on the standard energy but not on the conformal energy. The reason is explained in Remark \ref{rk-1-12-10-2020}. To get started, let
$$
\max\Big\{\sum_{\alpha}\Ecal_0^N(2,\del_{\alpha}w)^{1/2}, \Ecal_0^N(2,w)^{1/2},\sum_{k=2}^n\Ecal_{0,1}^N(2,\phit^k)^{1/2}, \Ecal_0^N(2,w_0)^{1/2}\Big\}= C_0\vep.
$$
Then we make the following bootstrap bound on $[2,s_1]$:
\begin{equation}\label{eq4-05-10-2020}
\max\Big\{\sum_{\alpha}\Ecal_0^N(s,\del_{\alpha}w)^{1/2}, \Ecal_0^N(s,w)^{1/2},\sum_{k=2}^n\Ecal_{0,1}^N(s,\phit^k)^{1/2}\Big\}\leq C_1\vep s^{\delta}.
\end{equation}
Suppose furthermore that
\begin{equation}\label{eq6-05-10-2020}
\Ecal_0^N(s,w_0)^{1/2}\leq C_1\vep .
\end{equation}
Here $1/100\leq \delta\leq 1/20$. We will prove the following {\sl improved energy bounds} on the same time interval:
\begin{equation}\label{eq7-05-10-2020}
\max\Big\{\sum_{\alpha}\Ecal_0^N(s,\del_{\alpha}w)^{1/2}, \Ecal_0^N(s,w)^{1/2},\sum_{k=2}^n\Ecal_{0,1}^N(s,\phit^k)^{1/2}\Big\}\leq \frac{1}{2}C_1\vep s^{\delta},
\end{equation}
\begin{equation}\label{eq8-05-10-2020}
\Ecal_0^N(s,w_0)^{1/2}\leq \frac{1}{2}C_1\vep.
\end{equation}
By \eqref{eq3-10-06-2020}, the bootstrap bounds \eqref{eq4-05-10-2020} guarantee the following decay estimates:
\begin{equation}\label{eq2-06-10-2020}
s|\del\del w|_{N-2} + t|\del\dels w|_{N-2}\leq CC_1\vep s^{\delta},
\end{equation}
\begin{equation}\label{eq5-05-10-2020}
s|\del w|_{N-2} + t|\dels w|_{N-2}\leq CC_1\vep s^{\delta}
\end{equation}
and
\begin{equation}\label{eq9-05-10-2020}
s|\del \phit^k|_{N-2} + t|\dels \phit^k|_{N-2} + t|\phit^k|_{N-2}\leq CC_1\vep s^{\delta},
\end{equation}
which leads to
\begin{equation}\label{eq10-05-10-2020}
t|\del \phit^k|_{N-3} + t^2|\dels \phit^k|_{N-3}\leq CC_1\vep s^{\delta}.
\end{equation}
By \eqref{eq7-10-06-2020} combined with \eqref{eq4-05-10-2020},
\begin{equation}\label{eq8-09-10-2020}
st|\del\dels w|_{N-3} + t^2|\dels\dels w|_{N-3}\leq CC_1\vep s^{\delta}.
\end{equation}
This leads to the following bound: for $|I|+|J|\leq N-3$,
$$
|\del_r\delu_a\del^IL^J w|\leq CC_1\vep (s/t)s^{-2+\delta} \leq CC_1\vep (t-r)^{-1/2+\delta/2}t^{-3/2+\delta/2}.
$$
For a fixed $(t,x)\in \Hcal_{[2,s_1]}$, we integrate this inequality along the segment $\{(t,\lambda x/|x|),\,|x|\leq \lambda\leq t-1\}$ and remark that $\delu_a\del^IL^J w$ vanishes on $\del\Kcal = \{r=t-1\}$; we obtain
$$
\aligned
|\delu_a\del^IL^J w(t,x)|\leq& \int_{|x|}^{t-1}|\del_r\delu_a\del^IL^J w|(t,\lambda x/|x|)d\lambda
\\
\leq& CC_1\vep (t-r)^{1/2+\delta/2}t^{-3/2+\delta/2} \leq CC_1\vep (s/t)^2s^{-1+\delta}.
\endaligned
$$
This leads to
\begin{equation}\label{eq4-07-10-2020}
|\dels w|_{N-3}\leq CC_1\vep (s/t)^2s^{-1+\delta}.
\end{equation}
Furthermore, by \eqref{eq6-05-10-2020} and \eqref{eq4-10-06-2020},
\begin{equation}\label{eq1-07-10-2020}
s|\del w_0|_{N-2} + t|\dels w_0|_{N-2} \leq CC_1\vep.
\end{equation}
Similarly to \eqref{eq4-07-10-2020}, the following bound holds for $w_0$:
\begin{equation}\label{eq10-08-10-2020}
|\dels w_0|_{N-3}\leq CC_1\vep(s/t)^2s^{-1}.
\end{equation}
Furthermore,
\begin{equation}\label{eq10-09-10-2020}
|\del\dels w_0|_{N-3}\leq CC_1\vep (s/t)s^{-2}.
\end{equation}
Recalling \eqref{eq2-05-10-2020}, the following bounds hold:
\begin{equation}\label{eq9-07-10-2020}
\|(s/t)|\del \phi^1|_N\|_{L^2(\Hcal_s)} + \||\dels \phi^1|_N\|_{L^2(\Hcal_s)}\leq CC_1\vep s^{\delta},
\end{equation}
\begin{equation}\label{eq2-07-10-2020}
|\del \phi^1|_{N-2}\leq CC_1\vep s^{-1+\delta},
\end{equation}
\begin{equation}\label{eq3-07-10-2020}
|\dels \phi^1|_{N-2}\leq CC_1\vep (s/t)s^{-1+\delta}.
\end{equation}
Combining \eqref{eq8-09-10-2020} and \eqref{eq10-08-10-2020}, we obtain
\begin{equation}\label{eq11-08-10-2020}
|\dels \phi^1|_{N-3}\leq CC_1\vep (s/t)s^{-2+\delta} + CC_1\vep (s/t)^2s^{-1} \leq CC_1\vep(s/t)^2s^{-1+\delta}.
\end{equation}

\subsection{Bounds on the Hessian form of $w$}

In this subsection we establish the following bounds:
\begin{equation}\label{eq5-07-10-2020}
(s/t)^2|\del\del w|_{N-3}\leq CC_1\vep (s/t)s^{-2+2\delta},
\end{equation}
\begin{equation}\label{eq6-07-10-2020}
\|(s/t)^2s|\del\del w|_{N-1}\|_{L^2(\Hcal_s)}\leq CC_1\vep s^{2\delta}.
\end{equation}
These are exactly the same as \eqref{eq12-15-08-2020} and \eqref{eq13-15-08-2020}. We establish bounds parallel to \eqref{eq2-08-10-2020} and \eqref{eq3-08-10-2020}. To do so, remark that
$$
\aligned
&|(\phit^k)^2|_{N-2}\leq C(C_1\vep)^2(s/t)^2s^{-2+2\delta},
\\
&\|(s/t)^{-1}|(\phit^k)^2|_N\|_{L^2(\Hcal_s)}\leq C(C_1\vep)^2s^{-1+2\delta},
\endaligned
$$
where \eqref{eq9-05-10-2020} is applied. Recalling the relation \eqref{eq2-05-10-2020}, a direct consequence of \eqref{eq5-07-10-2020} is
\begin{equation}\label{eq7-07-10-2020}
|\del\phi^1|_{N-3}\leq CC_1\vep(s/t)^{-1}s^{-2+2\delta} + CC_1\vep s^{-1}.
\end{equation}

\subsection{Bounds on $S_W[\phi]$ and $S_{KG}^k[\phi]$ and bounds on $w_0$}
\label{subsec-high-order}

This subsection is devoted to the higher-order terms. We establish their bounds and then give two direct bounds on $w_0$.

\paragraph*{$L^2$ bounds on higher-order terms.}
We first establish the following $L^2$ bounds:
\begin{equation}\label{eq1-06-10-2020}
\|(s/t)^{-1}|S_W[\phi^1,\phit^k]|_N\|_{L^2(\Hcal_s)} + \|(s/t)^{-1}|S_{KG}^k[\phi^1,\phit^k]|_N\|_{L^2(\Hcal_s)} \leq C(C_1\vep)^3s^{-2+3\delta}.
\end{equation}
$S_W$ and $S_{KG}^k$ vanish to third order in their arguments. By Lemma 2.5 of \cite{Ab-2019} and the remark made after (5.2) therein, the coefficients $\del^q\Gamma(\varphi_S,0)$ can be regarded as universal constants. Remark that in $S_W$ and $S_{KG}^k$, the cubic terms are linear combinations of
\begin{equation}\label{eq10-07-10-2020}
\aligned
&\phit^k \m^{\alpha\beta}\del_{\alpha}\phi^1\del_{\beta}\phi^1,\quad \phit^k \m^{\alpha\beta} \del_{\alpha}\phi^1\del_{\beta}\phit^j,\quad \phit^k \del\phit^j\del \phit^l,
\\
&\phit^k\phit^j\del\phi^1,\quad \phit^k\phit^j\del \phit^l,
\\
&\phit^j\phit^k\phit^l,
\endaligned
\end{equation}
where $\m$ is the Minkowski metric. So the first two terms enjoy a null structure, and the remaining terms contain at least two Klein-Gordon factors. We make the following estimates.
First, by \eqref{eq13-10-06-2020} and the null structure of $\m^{\alpha\beta}$,
\begin{equation}\label{eq14-07-10-2020}
\aligned
|\m^{\alpha\beta}\del_{\alpha}\phi^1\del_{\beta}\phi^1|_{N-3} \leq& C(s/t)^2|\del \phi^1|_{N-3}^2 + C|\dels \phi^1|_{N-3}|\del \phi^1|_{N-3}
\\
\leq& C(C_1\vep)^2(s/t)^2s^{-2+2\delta},
\endaligned
\end{equation}
where \eqref{eq2-07-10-2020} and \eqref{eq11-08-10-2020} are applied. Moreover,
$$
\aligned
&\||\m^{\alpha\beta}\del_{\alpha}\phi^1\del_{\beta}\phi^1|_N\|_{L^2(\Hcal_s)}
\\
\leq& C\|(s/t)^2|\del \phi^1|_{N-3}|\del\phi^1|_N\|_{L^2(\Hcal_s)}
\\
&+ C\||\dels \phi^1|_{N-3}|\del \phi^1|_{N}\|_{L^2(\Hcal_s)} + C\||\dels \phi^1|_{N}|\del \phi^1|_{N-3}\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep s^{-1+\delta}\|(s/t)^2|\del \phi^1|_N\|_{L^2(\Hcal_s)}
\\
&+ CC_1\vep s^{-1+\delta}\|(s/t)^2|\del \phi^1|_N\|_{L^2(\Hcal_s)} + CC_1\vep s^{-1+\delta}\||\dels \phi^1|_N\|_{L^2(\Hcal_s)},
\endaligned
$$
where \eqref{eq2-07-10-2020} and \eqref{eq11-08-10-2020} are applied. Then we conclude that
\begin{equation}\label{eq11-07-10-2020}
\||\m^{\alpha\beta}\del_{\alpha}\phi^1\del_{\beta}\phi^1|_N\|_{L^2(\Hcal_s)}\leq C(C_1\vep)^2s^{-1+2\delta}.
\end{equation}
Similarly,
\begin{equation}\label{eq12-07-10-2020}
|\m^{\alpha\beta}\del_{\alpha}\phi^1\del_{\beta}\phit^j|_{N-3}\leq C(C_1\vep)^2 (s/t)^2s^{-2+2\delta},
\end{equation}
\begin{equation}\label{eq13-07-10-2020}
\||\m^{\alpha\beta}\del_{\alpha}\phi^1\del_{\beta}\phit^j|_N\|_{L^2(\Hcal_s)}\leq C(C_1\vep)^2s^{-1+2\delta}.
\end{equation}
Then the first term in \eqref{eq10-07-10-2020} is bounded as follows:
\begin{equation}\label{eq19-07-10-2020}
\aligned
&\|(s/t)^{-1}|\phit^k \m^{\alpha\beta}\del_{\alpha}\phi^1\del_{\beta}\phi^1|_N\|_{L^2(\Hcal_s)}
\\
\leq& \|(s/t)^{-1}|\phit^k|_{N-2}|\m^{\alpha\beta}\del_{\alpha}\phi^1\del_{\beta}\phi^1|_N\|_{L^2(\Hcal_s)}
\\
&+ \|(s/t)^{-1}|\phit^k|_N|\m^{\alpha\beta}\del_{\alpha}\phi^1\del_{\beta}\phi^1|_{N-3}\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep s^{-1+\delta}\||\m^{\alpha\beta}\del_{\alpha}\phi^1\del_{\beta}\phi^1|_N\|_{L^2(\Hcal_s)} + C(C_1\vep)^2 s^{-2+2\delta}\||\phit^k|_N\|_{L^2(\Hcal_s)}
\\
\leq& C(C_1\vep)^3 s^{-2+3\delta},
\endaligned
\end{equation}
where \eqref{eq14-07-10-2020} and \eqref{eq11-07-10-2020} are applied. The second term in \eqref{eq10-07-10-2020} is bounded similarly, using \eqref{eq12-07-10-2020} and \eqref{eq13-07-10-2020}:
\begin{equation}\label{eq20-07-10-2020}
\aligned
\|(s/t)^{-1}|\phit^k\m^{\alpha\beta}\del_{\alpha}\phit^j\del_{\beta}\phi^1|_N\|_{L^2(\Hcal_s)} \leq&C(C_1\vep)^3s^{-2+3\delta}.
\endaligned
\end{equation}
The remaining terms in \eqref{eq10-07-10-2020} contain at least two Klein-Gordon factors, which permits us to obtain sufficient $L^2$ bounds. We only write out the bound on $\phit^k\phit^j\del\phi^1$ (which is the most critical one) and omit the rest:
\begin{equation}\label{eq17-07-10-2020}
\aligned
&\|(s/t)^{-1}|\phit^k\phit^j\del\phi^1|_N \|_{L^2(\Hcal_s)}
\\
\leq& \|(s/t)^{-1}|\phit^k|_{N-2}|\phit^j|_{N-2}|\del\phi^1|_N\|_{L^2(\Hcal_s)}
\\
&+ \|(s/t)^{-1}|\phit^k|_{N-2}|\phit^j|_N|\del\phi^1|_{N-2}\|_{L^2(\Hcal_s)}
\\
&+ \|(s/t)^{-1}|\phit^k|_N|\phit^j|_{N-2}|\del\phi^1|_{N-2}\|_{L^2(\Hcal_s)}
\\
\leq& C(C_1\vep)^2s^{-2+2\delta}\|(s/t)|\del\phi^1|_N\|_{L^2(\Hcal_s)} +C(C_1\vep)^2s^{-2+2\delta}\||\phit^j|_N\|_{L^2(\Hcal_s)}
\\
&+C(C_1\vep)^2s^{-2+2\delta}\||\phit^j|_N\|_{L^2(\Hcal_s)}
\\
\leq& C(C_1\vep)^3 s^{-2+3\delta}.
\endaligned
\end{equation}
For fourth- and higher-order terms, recall \eqref{eq16-07-10-2020}: there are at least two Klein-Gordon factors.
So they are bounded similarly to \eqref{eq17-07-10-2020}, and we omit the details. Then, summarizing \eqref{eq19-07-10-2020}, \eqref{eq20-07-10-2020}, \eqref{eq17-07-10-2020} and the above discussion, we conclude \eqref{eq1-06-10-2020}.

\paragraph*{Pointwise bounds on higher-order terms.}
We establish the following bounds:
\begin{equation}\label{eq12-08-10-2020}
|S_W[\phi^1,\phit^k]|_{N-3} + |S_{KG}^k[\phi^1,\phit^k]|_{N-3}\leq C(C_1\vep)^3(s/t)^2s^{-3+3\delta}.
\end{equation}
This also relies on \eqref{eq10-07-10-2020}. The first two null cubic forms are bounded via \eqref{eq14-07-10-2020} and \eqref{eq12-07-10-2020} combined with \eqref{eq9-05-10-2020}. The remaining terms, together with the fourth- and higher-order terms, contain at least two Klein-Gordon factors (among these the worst is $\phit^k\phit^j\del\phi^1$) and are bounded directly by applying \eqref{eq9-05-10-2020}, \eqref{eq2-07-10-2020} and \eqref{eq11-08-10-2020}.

\paragraph*{Improving the energy bounds on $w_0$.}
We apply Proposition \ref{prop 1 energy} directly to
\begin{equation}\label{eq5-18-10-2020}
\Box \del^IL^J w_0 = \del^IL^J(S_W[\phi^1,\phit^k])
\end{equation}
for $|I|+|J|\leq N$. By \eqref{ineq 3 prop 1 energy}, we obtain, thanks to \eqref{eq1-06-10-2020},
$$
E_0(s,\del^IL^J w_0)^{1/2}\leq E_0(2,\del^IL^J w_0)^{1/2} + C(C_1\vep)^3\int_2^s\tau^{-2+3\delta}d\tau,
$$
which leads to
\begin{equation}
E_0(s,\del^IL^J w_0)^{1/2}\leq E_0(2,\del^IL^J w_0)^{1/2} + C(C_1\vep)^3.
\end{equation}
Then we conclude that
\begin{equation}\label{eq26-07-10-2020}
\Ecal_0^N(s,w_0)^{1/2}\leq C_0\vep + C(C_1\vep)^3.
\end{equation}

\paragraph*{Bounds on Hessian forms of $w_0$.}
Similarly to the component $w$, we establish:
\begin{equation}\label{eq5-09-10-2020}
(s/t)^2|\del\del w_0|_{N-3}\leq CC_1\vep (s/t)s^{-2}.
\end{equation}
This also relies on Proposition \ref{prop1-14-08-2020}. Recalling \eqref{eq12-08-10-2020} and \eqref{eq2 lem Hessian-flat-zero}, one has
$$
\aligned
(s/t)^2|\del\del w_0|_{N-3}\leq& Ct^{-1}|\del w_0|_{N-2} + C|\Box w_0|_{N-3}
\\
\leq& CC_1\vep(s/t)s^{-2} + C(C_1\vep)^3 (s/t)^2s^{-3+3\delta},
\endaligned
$$
which leads to \eqref{eq5-09-10-2020}.

\paragraph*{Conformal energy bound on $w_0$.}
We establish
\begin{equation}\label{eq2-18-10-2020}
\Ecal_2^N(s,w_0)^{1/2}\leq CC_0\vep + C(C_1\vep)^3s^{3\delta}.
\end{equation}
We only need to apply Proposition \ref{prop-conformal} to \eqref{eq5-18-10-2020} for $|I|+|J|\leq N$; recalling \eqref{eq1-06-10-2020}, \eqref{eq2-18-10-2020} is concluded. Recalling \eqref{eq2-10-06-2020} and \eqref{eq4-10-06-2020}, we obtain the following bounds:
\begin{equation}\label{eq3-18-10-2020}
\|(s/t)^2s|\del w_0|_N\|_{L^2(\Hcal_s)}\leq CC_1\vep s^{3\delta},
\end{equation}
\begin{equation}\label{eq4-18-10-2020}
|\del w_0|_{N-2}\leq CC_1\vep (s/t)^{-1}s^{-2+3\delta}.
\end{equation}

\subsection{Conical decay of the Klein-Gordon components}

Parallel to \eqref{eq16-15-08-2020} and \eqref{eq17-15-08-2020}, we establish the following two bounds on $\phit^k$:
\begin{equation}\label{eq25-07-10-2020}
|\phit^k|_{N-3}\leq CC_1\vep (s/t)^2s^{-1+\delta},
\end{equation}
\begin{equation}\label{eq13-08-10-2020}
\|(s/t)^{-1}|\phit^k|_{N-1}\|_{L^2(\Hcal_s)}\leq CC_1\vep s^{\delta}.
\end{equation}
To do so, we apply Proposition \ref{prop1-fast-kg}. We then need to bound the right-hand side of the equation for $\phit^k$. The higher-order terms are bounded by \eqref{eq12-08-10-2020} and \eqref{eq1-06-10-2020}.
The quadratic terms are bounded exactly as in Subsection \ref{subsec-model-conical}, because $(\phi^1,\phit^k)$ and $(w,v)$ satisfy the same bounds, respectively. Then we conclude that
\begin{equation}\label{eq17-08-10-2020}
|\Box \phit^k + c^2\phit^k|_{N-3}\leq CC_1\vep |\phit^k|_{N-3} + C(C_1\vep)^3(s/t)^2s^{-3+3\delta},
\end{equation}
\begin{equation}\label{eq18-08-10-2020}
\|(s/t)^{-1}|\Box \phit^k + c^2\phit^k|_{N-1}\|_{L^2(\Hcal_s)}\leq C(C_1\vep)^2s^{\delta} + CC_1\vep\|(s/t)^{-1}|\phit^k|_{N-1}\|_{L^2(\Hcal_s)}.
\end{equation}
Then, following the argument in Subsection \ref{subsec-model-conical}, \eqref{eq25-07-10-2020} and \eqref{eq13-08-10-2020} are established. Here we also need the smallness condition on $C_1\vep$, as in \eqref{eq7-08-10-2020}:
\begin{equation}\label{eq20-09-10-2020}
CC_1\vep\leq \frac{c^2}{2}.
\end{equation}

\subsection{Lower-order energy bounds on the Klein-Gordon components}

This is parallel to Subsection \ref{subsec-model-KG-lower}. We establish
\begin{equation}\label{eq14-08-10-2020}
\sum_{k=2}^n\Ecal_{0,1}^{N-1}(s,\phit^k)^{1/2}\leq C_0\vep + C(C_1\vep)^2.
\end{equation}
The higher-order terms $S_{KG}^k[\phi]$ are bounded by \eqref{eq1-06-10-2020}. Furthermore,
$$
\aligned
&\||\phit^j\del w_0|_{N-1}\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep s^{-2+3\delta}\|(s/t)^{-1}|\phit^j|_{N-1}\|_{L^2(\Hcal_s)} + CC_1\vep s^{-2+\delta}\|(s/t)^2s|\del w_0|_{N-1}\|_{L^2(\Hcal_s)}
\\
\leq& C(C_1\vep)^2 s^{-2+4\delta},
\endaligned
$$
where \eqref{eq4-18-10-2020} and \eqref{eq25-07-10-2020} are applied for the first inequality, and \eqref{eq3-18-10-2020} and \eqref{eq13-08-10-2020} are applied for the second. Moreover,
$$
\aligned
&\||\phit^j\del\del w|_{N-1}\|_{L^2(\Hcal_s)}
\\
\leq& C\||\phit^j|_{N-1}|\del\del w|_{N-3}\|_{L^2(\Hcal_s)} + C\||\del\del w|_{N-1}|\phit^j|_{N-3}\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep s^{-2+2\delta}\|(s/t)^{-1}|\phit^j|_{N-1}\|_{L^2(\Hcal_s)} + CC_1\vep s^{-2+\delta}\|(s/t)^2s|\del\del w|_{N-1}\|_{L^2(\Hcal_s)}
\\
\leq&C(C_1\vep)^2 s^{-2+3\delta}.
\endaligned
$$
Here we have applied \eqref{eq5-07-10-2020} and \eqref{eq25-07-10-2020} for the second inequality, and \eqref{eq13-08-10-2020} and \eqref{eq6-07-10-2020} for the third inequality. These bounds are integrable in time, so we conclude \eqref{eq14-08-10-2020}. A direct result of \eqref{eq14-08-10-2020} is the following sharp bound on $\phit^k$:
\begin{equation}\label{eq16-08-10-2020}
s|\del \phit^k|_{N-3} + t|\phit^k|_{N-3} \leq CC_1\vep.
\end{equation}

\subsection{Sharp decay bounds}

Now we are ready to establish the following sharp bound:
\begin{equation}\label{eq15b-08-10-2020}
|\del\del w|_{N-4}\leq CC_1\vep s^{-1}.
\end{equation}
The proof is quite similar to that of \eqref{eq18-01-09-2020}. We remark that, following the notation of Proposition \ref{prpo2 wave-sharp}, for $|I|+|J|\leq N-4$,
$$
|S^w[\del_{\alpha}\del^IL^J w]|\leq CC_1\vep st^{-2}|\del w|_{N-2},
$$
which leads to
\begin{equation}\label{eq8-18-10-2020}
|\Delta^w[\del_{\alpha}\del^IL^J w]|\leq CC_1\vep t^{-2+\delta}.
\end{equation}
Furthermore, for $S^w[\del_{\alpha}\del^IL^J w]$, we also need the following bound on $|\phit^k|$:
\begin{equation}\label{eq6-18-10-2020}
|\phit^k|_{N-4} \leq CC_1\vep (s/t)^2s^{-1}.
\end{equation}
This is proved as follows. Recalling Proposition \ref{prop1-fast-kg}, \eqref{eq16-08-10-2020} and \eqref{eq17-08-10-2020},
$$
\aligned
c^2|\phit^k|_{N-4}\leq& C(s/t)^2|\del \phit^k|_{N-3} + C|\Box \phit^k + c^2\phit^k|_{N-4}
\\
\leq& CC_1\vep(s/t)^2s^{-1} + CC_1\vep|\phit^k|_{N-4} + C(C_1\vep)^3(s/t)^2s^{-3+3\delta}.
\endaligned
$$
When $CC_1\vep\leq \frac{c^2}{2}$, \eqref{eq6-18-10-2020} is concluded. Then, recalling \eqref{eq16-08-10-2020} and the above bound \eqref{eq6-18-10-2020},
\begin{equation}\label{eq7-18-10-2020}
|S^w[\del_{\alpha}\del^IL^J w]|\leq C|\phit^k|_{N-4}|\del \phit^k|_{N-4}\leq C|\phit^k|_{N-4}|\phit^k|_{N-3}\leq C(C_1\vep)^2(s/t)^2t^{-1}.
\end{equation}
Now we apply \eqref{eq1-29-05-2020} to
$$
\Box \del^IL^J w = -\sum_{k=2}^n\del^IL^J \big(|\phit^k|^2\big).
$$
Substituting the above bounds \eqref{eq8-18-10-2020} and \eqref{eq7-18-10-2020} into \eqref{eq1-29-05-2020} and considering a point $(\bar{t},\bar{x})\in \Hcal_{\bar{s}}$, we obtain
$$
\aligned
&\bar{s}|\del_t\del_{\alpha} w|(\bar{t},\bar{x})
\\
\leq& CC_0\vep + C(C_1\vep)^2\int_2^{\bar{t}} (s/t)^2 t^{-1}\big|_{\gamma(t;\bar{t},\bar{x})} e^{-\int_{t}^{\bar{t}}P_{\bar{t},\bar{x}}(\eta)d\eta}dt + CC_1\vep \int_2^{\bar{t}} t^{-2+\delta}dt
\\
\leq& CC_1\vep + C(C_1\vep)^2\int_2^{\bar{t}}(s/t)^2t^{-1}\big|_{\gamma(t;\bar{t},\bar{x})}e^{-\int_{t}^{\bar{t}}(s/t)^2t^{-1}\big|_{\gamma(\eta;\bar{t},\bar{x})}d\eta}dt
\\
\leq& CC_1\vep.
\endaligned
$$
So we conclude that
\begin{equation}\label{eq9-18-10-2020}
|\del_t\del_{\alpha}\del^IL^J w|\leq CC_1\vep s^{-1}.
\end{equation}
Similarly to the argument applied for \eqref{eq13-01-09-2020}, we conclude \eqref{eq15b-08-10-2020}.

\subsection{Improved energy bounds and conclusion}

For \eqref{eq7-05-10-2020}, we follow an argument similar to that of Subsection \ref{subsec-model-improved}. Thanks to \eqref{eq16-08-10-2020},
\begin{equation}\label{eq9-09-10-2020}
\aligned
&\Ecal_0^p(s,w)^{1/2} + \sum_{\alpha}\Ecal_0^p(s,\del_{\alpha} w)^{1/2}
\\
\leq& \Ecal_0^p(2,w)^{1/2} + \sum_{\alpha}\Ecal_0^p(2,\del_{\alpha} w)^{1/2} + CC_1\vep \sum_{k=2}^n\int_2^s\tau^{-1}\Ecal_{0,1}^p(\tau, \phit^k)^{1/2}d\tau.
\endaligned
\end{equation}
This is due to the following bound combined with Proposition \ref{prop 1 energy}:
$$
\||(\phit^k)^2|_p\|_{L^2(\Hcal_s)} + \||(\phit^k\del\phit^k)|_p\|_{L^2(\Hcal_s)} \leq CC_1\vep s^{-1}\sum_{k=2}^n\Ecal_{0,1}^p(s,\phit^k)^{1/2}.
$$
The bounds on $\phit^k$ are obtained similarly. First, one has the integrable $L^2$ bounds \eqref{eq1-06-10-2020} on the higher-order terms $S_{KG}^k[\phi]$. Second, the term $vw_0$ does not appear. So we establish the following bounds:
\begin{equation}\label{eq13-09-10-2020}
\||\phit^k\del w_0|_p\|_{L^2(\Hcal_s)}\leq CC_1\vep s^{-1}\sum_{k=2}^n\Ecal_{0,1}^p(s,\phit^k)^{1/2} + C(C_1\vep)^2s^{-2+4\delta},
\end{equation}
\begin{equation}\label{eq14-09-10-2020}
\aligned
\||\phit^k\del\del w|_p\|_{L^2(\Hcal_s)}\leq& CC_1\vep s^{-1}\Big(\sum_{k=2}^n\Ecal_{0,1}^p(s,\phit^k)^{1/2} + \sum_{\alpha}\Ecal_0^p(s,\del_{\alpha}w)^{1/2}\Big).
\endaligned
\end{equation}
For \eqref{eq13-09-10-2020}, remark that
$$
\aligned
\||\phit^k\del w_0|_p\|_{L^2(\Hcal_s)}\leq& C\||\phit^k|_p|\del w_0|_{N-2}\|_{L^2(\Hcal_s)} + C\||\phit^k|_{N-4}|\del w_0|_p\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep s^{-1}\||\phit^k|_p\|_{L^2(\Hcal_s)} + CC_1\vep s^{-2 + \delta}\|(s/t)^2s|\del w_0|_p\|_{L^2(\Hcal_s)}
\\
\leq&CC_1\vep s^{-1} \sum_{k=2}^n\Ecal_{0,1}^p(s,\phit^k)^{1/2} + C(C_1\vep)^2 s^{-2 + 4\delta},
\endaligned
$$
where \eqref{eq1-07-10-2020}, \eqref{eq25-07-10-2020} and \eqref{eq3-18-10-2020} are applied.
For \eqref{eq14-09-10-2020}, remark that
$$
\aligned
&\||\phit^k\del\del w|_p\|_{L^2(\Hcal_s)}
\\
\leq& C\||\phit^k|_p|\del\del w|_{N-4}\|_{L^2(\Hcal_s)} + C\||\phit^k|_{N-3}|\del\del w|_p\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep s^{-1}\||\phit^k|_p\|_{L^2(\Hcal_s)} + CC_1\vep s^{-1}\|(s/t)|\del\del w|_p\|_{L^2(\Hcal_s)}
\\
\leq& CC_1\vep s^{-1}\Big(\sum_{k=2}^n\Ecal_{0,1}^p(s,\phit^k)^{1/2} + \sum_{\alpha}\Ecal_0^p(s,\del_{\alpha}w)^{1/2}\Big).
\endaligned
$$
Here, for the second inequality, \eqref{eq16-08-10-2020} and \eqref{eq15b-08-10-2020} are applied. Now, recalling \eqref{eq1-06-10-2020} and substituting all these $L^2$ bounds into Proposition \ref{prop 1 energy}, we obtain:
\begin{equation}
\aligned
\sum_{k=2}^n\Ecal_{0,1}^p(s,\phit^k)^{1/2}\leq& \sum_{k=2}^n\Ecal_{0,1}^p(2,\phit^k)^{1/2} + C(C_1\vep)^2
\\
& + CC_1\vep \int_2^s\tau^{-1}\Big(\sum_{k=2}^n\Ecal_{0,1}^p(\tau,\phit^k)^{1/2} + \sum_{\alpha}\Ecal_0^p(\tau,\del_{\alpha}w)^{1/2}\Big) d\tau.
\endaligned
\end{equation}
Again, let
$$
A^p(s) := \max\Big\{\sum_\alpha\Ecal_0^p(s,\del_{\alpha}w)^{1/2}, \Ecal_0^p(s,w)^{1/2},\sum_{k=2}^n\Ecal_{0,1}^p(s,\phit^k)^{1/2}\Big\}.
$$
Then for $0\leq p\leq N$,
\begin{equation}\label{eq16-09-10-2020}
\aligned
A^p(s)\leq& C_0\vep + C(C_1\vep)^2 + CC_1\vep\int_2^s\tau^{-1}A^p(\tau)d\tau.
\endaligned
\end{equation}
Thanks to Gronwall's inequality, we conclude with the energy bound
\begin{equation}\label{eq19-09-10-2020}
A^{N}(s)\leq \big(C_0\vep + C(C_1\vep)^2\big)s^{CC_1\vep}.
\end{equation}

\subsection{Conclusion of the bootstrap argument}

Now we are ready to improve the bootstrap bounds. \eqref{eq6-05-10-2020} is improved by \eqref{eq26-07-10-2020}. More precisely, if we take
\begin{equation}\label{eq22a-09-10-2020}
C_1\geq 2C_0,\quad \vep\leq \sqrt{\frac{C_1-2C_0}{2CC_1^3}},
\end{equation}
then \eqref{eq26-07-10-2020} leads to \eqref{eq8-05-10-2020}. Furthermore, we take
\begin{equation}\label{eq22b-09-10-2020}
\vep\leq \frac{c^2}{2CC_1}
\end{equation}
in order to guarantee \eqref{eq20-09-10-2020}. Then, taking
\begin{equation}\label{eq22c-09-10-2020}
\vep\leq \frac{C_1 - 2C_0}{2CC_1^{3/2}}, \quad \vep \leq \delta/CC_1,
\end{equation}
\eqref{eq7-05-10-2020} is guaranteed. Finally, taking $\vep_0$ to be the minimum of the above quantities, the desired stability result is established.
\section{Introduction}

Describing space-time as a continuum is a very successful concept in physics. Various considerations, however, suggest that the Planck length limits the accuracy of spatial measurements \cite{Mead:1966zz,Garay:1995}. This uncertainty suggests that describing space-time as a continuum is no longer appropriate for small space-time distances.\footnote{In his inaugural lecture, Bernhard Riemann had already considered the necessity of modifying the geometry of space if spacings get smaller and smaller.}

In quantum mechanics, Heisenberg's commutation relations for position and momentum coordinates imply that the state of a particle in phase space cannot be determined with arbitrary accuracy. We can assume analogously that the uncertainty in position measurements is due to a noncommutativity of position operators
\begin{equation}
\lbrack\hspace{0.01in}\hat{x}^{i},\hat{x}^{j}]=\theta^{ij}(\hat{x}).
\label{VerRelKooAlg}
\end{equation}
H.~Snyder was one of the first to try to construct a quantized space-time with non-com\-mut\-ing position coordinates \cite{Snyder:1947a}. In recent times the focus has been on noncommutative space-time algebras with $\theta^{ij}\in\mathbb{C}$ \cite{Doplicher:1994zv,Chu:1998qz,Schomerus:1999ug,Grimstrup_2002}. Furthermore, space-time algebras with the commutator in Eq.~(\ref{VerRelKooAlg}) being a linear function of the space-time coordinates, i.~e. $\theta^{ij}(\hat{x})=\Theta_{k}^{ij}\hspace{0.01in}\hat{x}^{k}$, are also of particular interest \cite{Lukierski:1991pn,Majid:1994cy}. We consider, however, the $q$-de\-formed Euclidean space \cite{Faddeev:1987ih}, which is a noncommutative space with quadratic relations, i.~e. $\theta^{ij}(\hat{x})=\Theta_{kl}^{ij}\hspace{0.01in}\hat{x}^{k}\hat{x}^{l}$.

The commutation relations for the coordinates of $q$-de\-formed Euclidean space satisfy the Poincar\'{e}-Birkhoff-Witt property. It says that each vector space generated by homogeneous polynomials of a fixed degree has the same dimension as in the commutative case with $\theta^{ij}(\hat{x})=0$. Thus the nor\-mal-or\-dered monomials of noncommutative coordinates form a basis of $q$-de\-formed Euclidean space. For this reason, we can associate the noncommutative algebra of $q$-de\-formed Euclidean space with a commutative coordinate algebra by using the star-prod\-uct formalism \cite{Moyal:1949sk}. The star-prod\-uct formalism enables us to construct a $q$-de\-formed version of mathematical analysis \cite{Carnovale:1999,Wachter:2007A}. In Ref.~\cite{Wachter:2019A}, we have discussed $q$-de\-formed momentum eigenfunctions within the framework of this $q$-de\-formed analysis. We have shown in Ref.~\cite{Wachter:2020A} that the time evolution operator of a quantum system in $q$-de\-formed Euclidean space is of the same form as in the undeformed case.

In this article, we apply our findings to a nonrelativistic particle in $q$-de\-formed Euclidean space. First, we give $q$-analogs for the Hamilton operator of a free, nonrelativistic particle. Next, we construct $q$-de\-formed plane wave solutions to the corresponding Schr\"{o}\-dinger equations. We also show that the $q$-de\-formed plane waves form a complete orthonormal set of functions. This fact enables us to write down $q$-de\-formed versions of the propagator for a nonrelativistic particle. Finally, we show how to calculate expectation values of position and momentum with the solutions of our $q$-de\-formed Schr\"{o}\-din\-ger equations.
\section{Preliminaries}

\subsection{Star-products\label{KapQuaZeiEle}}

The three-di\-men\-sion\-al $q$-de\-formed Euclidean space $\mathbb{R}_{q}^{3}$ has the generators $X^{+}$, $X^{3}$, and $X^{-}$, subject to the following commutation relations \cite{Lorek:1997eh}:
\begin{align}
X^{3}X^{+} & =q^{2}X^{+}X^{3},\nonumber\\
X^{3}X^{-} & =q^{-2}X^{-}X^{3},\nonumber\\
X^{-}X^{+} & =X^{+}X^{-}+(q-q^{-1})\hspace{0.01in}X^{3}X^{3}.
\label{RelQuaEukDre}
\end{align}
We can extend the algebra of $\mathbb{R}_{q}^{3}$ by a time element $X^{0}$, which commutes with the generators $X^{+}$, $X^{3}$, and $X^{-}$ \cite{Wachter:2020A}:
\begin{equation}
X^{0}X^{A}=X^{A}X^{0},\text{\qquad}A\in\{+,3,-\}.
\label{ZusRelExtDreEukQUa}
\end{equation}
In the following, we refer to the algebra spanned by the generators $X^{i}$ with $i\in\{0,+,3,-\}$ as $\mathbb{R}_{q}^{3,t}$.

There is a $q$-analog of the three-di\-men\-sion\-al Euclidean metric $g^{AB}$ with its inverse $g_{AB}$ \cite{Lorek:1997eh} (rows and columns are arranged in the order $+,3,-$):
\begin{equation}
g_{AB}=g^{AB}=\left(
\begin{array}[c]{ccc}
0 & 0 & -\hspace{0.01in}q\\
0 & 1 & 0\\
-\hspace{0.01in}q^{-1} & 0 & 0
\end{array}
\right) .
\end{equation}
We can use the $q$-de\-formed metric to raise and lower indices:
\begin{equation}
X_{A}=g_{AB}\hspace{0.01in}X^{B},\qquad X^{A}=g^{AB}X_{B}.
\label{HebSenInd}
\end{equation}
The algebra $\mathbb{R}_{q}^{3,t}$ has a semilinear, involutive, and anti-multiplicative mapping, which we call \textit{quantum space conjugation}. If we indicate conjugate elements of a quantum space by a bar,\footnote{A bar over a complex number indicates complex conjugation.} we can write the properties of quantum space conjugation as follows ($\alpha,\beta\in\mathbb{C}$ and $u,v\in\mathbb{R}_{q}^{3,t}$):
\begin{equation}
\overline{\alpha\,u+\beta\,v}=\overline{\alpha}\,\overline{u}+\overline{\beta}\,\overline{v},\quad\overline{\overline{u}}=u,\quad\overline{u\,v}=\overline{v}\,\overline{u}.
\end{equation}
The conjugation for $\mathbb{R}_{q}^{3,t}$ is compatible with the commutation relations in Eq.~(\ref{RelQuaEukDre}) and Eq.~(\ref{ZusRelExtDreEukQUa}) if the following applies \cite{Wachter:2020A}:
\begin{equation}
\overline{X^{A}}=X_{A}=g_{AB}\hspace{0.01in}X^{B},\qquad\overline{X^{0}}=X_{0}.
\label{ConSpaKoo}
\end{equation}
We can only test a physical theory if it predicts measurement results. The problem, however, is: How can we associate the elements of the noncommutative space $\mathbb{R}_{q}^{3,t}$ with real numbers? One solution to this problem is to introduce a vector space isomorphism between the noncommutative algebra $\mathbb{R}_{q}^{3,t}$ and a corresponding commutative coordinate algebra $\mathbb{C}[\hspace{0.01in}x^{+},x^{3},x^{-},t\hspace{0.01in}]$. We recall that the nor\-mal-or\-dered monomials in the generators $X^{i}$ form a basis of the algebra $\mathbb{R}_{q}^{3,t}$, i.~e. we can write each element $F\in\mathbb{R}_{q}^{3,t}$ uniquely as a finite or infinite linear combination of monomials with a given normal ordering (\textit{Poincar\'{e}-Birkhoff-Witt property}):
\begin{equation}
F=\sum\limits_{n_{+},\ldots,\hspace{0.01in}n_{0}}a_{\hspace{0.01in}n_{+}\ldots\hspace{0.01in}n_{0}}\,(X^{+})^{n_{+}}(X^{3})^{n_{3}}(X^{-})^{n_{-}}(X^{0})^{n_{0}},\quad\quad a_{\hspace{0.01in}n_{+}\ldots\hspace{0.01in}n_{0}}\in\mathbb{C}.
\end{equation}
Since the monomials $(x^{+})^{n_{+}}(x^{3})^{n_{3}}(x^{-})^{n_{-}}t^{\hspace{0.01in}n_{0}}$ with $n_{+},\ldots,n_{0}\in\mathbb{N}_{0}$ form a basis of the commutative algebra $\mathbb{C}[\hspace{0.01in}x^{+},x^{3},x^{-},t\hspace{0.01in}]$, we can define a vector space isomorphism
\begin{equation}
\mathcal{W}:\mathbb{C}[\hspace{0.01in}x^{+},x^{3},x^{-},t\hspace{0.01in}]\rightarrow\mathbb{R}_{q}^{3,t}
\label{VecRauIsoInv}
\end{equation}
with
\begin{equation}
\mathcal{W}\left( (x^{+})^{n_{+}}(x^{3})^{n_{3}}(x^{-})^{n_{-}}t^{\hspace{0.01in}n_{0}}\right) =(X^{+})^{n_{+}}(X^{3})^{n_{3}}(X^{-})^{n_{-}}(X^{0})^{n_{0}}.
\label{StePro0}
\end{equation}
In general, we have
\begin{equation}
\mathbb{C}[\hspace{0.01in}x^{+},x^{3},x^{-},t\hspace{0.01in}]\ni f\mapsto F\in\mathbb{R}_{q}^{3,t},
\end{equation}
where
\begin{align}
f & =\sum\limits_{n_{+},\ldots,\hspace{0.01in}n_{0}}a_{\hspace{0.01in}n_{+}\ldots\hspace{0.01in}n_{0}}\,(x^{+})^{n_{+}}(x^{3})^{n_{3}}(x^{-})^{n_{-}}t^{\hspace{0.01in}n_{0}},\nonumber\\
F & =\sum\limits_{n_{+},\ldots,\hspace{0.01in}n_{0}}a_{\hspace{0.01in}n_{+}\ldots\hspace{0.01in}n_{0}}\,(X^{+})^{n_{+}}(X^{3})^{n_{3}}(X^{-})^{n_{-}}(X^{0})^{n_{0}}.
\label{AusFfNorOrd}
\end{align}
The vector space isomorphism $\mathcal{W}$ is nothing else but the \textit{Moyal-Weyl mapping}, which assigns an operator $F$ to a complex-valued function $f$ \cite{Bayen:1977ha,1997q.alg.....9040K,Madore:2000en,Moyal:1949sk}. We can extend this vector space isomorphism to an algebra isomorphism if we introduce a new product on the commutative coordinate algebra. This so-called \textit{star-product}, symbolized by $\circledast$, satisfies the following homomorphism condition:
\begin{equation}
\mathcal{W}\left( f\circledast g\right) =\mathcal{W}\left( f\right) \cdot\mathcal{W}\left( \hspace{0.01in}g\right) .
\label{HomBedWeyAbb}
\end{equation}
Since the Mo\-yal-Weyl mapping is invertible, we can write the star-prod\-uct as follows:
\begin{equation}
f\circledast g=\mathcal{W}^{\hspace{0.01in}-1}\big (\,\mathcal{W}\left( f\right) \cdot\mathcal{W}\left( \hspace{0.01in}g\right) \big ).
\label{ForStePro}
\end{equation}
To get explicit formulas for calculating star-prod\-ucts, we first have to write a noncommutative product of two nor\-mal-or\-dered monomials as a linear combination of nor\-mal-or\-dered monomials again (see Ref.~\cite{Wachter:2002A} for details):
\begin{equation}
(X^{+})^{n_{+}}\ldots\hspace{0.01in}(X^{0})^{n_{0}}\cdot(X^{+})^{m_{+}}\ldots\hspace{0.01in}(X^{0})^{m_{0}}=\sum_{\underline{k}\hspace{0.01in}=\hspace{0.01in}0}B_{\underline{k}}^{\hspace{0.01in}\underline{n},\underline{m}}\,(X^{+})^{k_{+}}\ldots\hspace{0.01in}(X^{0})^{k_{0}}.
\label{EntProMon}
\end{equation}
We achieve this by using the commutation relations for the noncommutative coordinates [cf. Eq.~(\ref{RelQuaEukDre})]. From the concrete form of the series expansion in Eq.~(\ref{EntProMon}), we can finally read off a formula to calculate the star-prod\-uct of two power series in commutative space-time coordinates ($\lambda=q-q^{-1}$):
\begin{gather}
f(\mathbf{x},t)\circledast g(\mathbf{x},t)=\nonumber\\
\sum_{k\hspace{0.01in}=\hspace{0.01in}0}^{\infty}\lambda^{k}\hspace{0.01in}\frac{(x^{3})^{2k}}{[[k]]_{q^{4}}!}\,q^{2(\hat{n}_{3}\hspace{0.01in}\hat{n}_{+}^{\prime}+\,\hat{n}_{-}\hat{n}_{3}^{\prime})}D_{q^{4},\hspace{0.01in}x^{-}}^{k}f(\mathbf{x},t)\,D_{q^{4},\hspace{0.01in}x^{\prime+}}^{k}g(\mathbf{x}^{\prime},t)\big|_{x^{\prime}\rightarrow\hspace{0.01in}x}.
\label{StaProForExp}
\end{gather}
The argument $\mathbf{x}$ indicates a dependence on the spatial coordinates $x^{+}$, $x^{3}$, and $x^{-}$. Note that the expression above depends on the operators
\begin{equation}
\hat{n}_{A}=x^{A}\frac{\partial}{\partial x^{A}}
\end{equation}
and the so-called Jackson derivatives \cite{Jackson:1910yd}:
\begin{equation}
D_{q^{k},\hspace{0.01in}x}\,f=\frac{f(q^{k}x)-f(x)}{q^{k}x-x}.
\end{equation}
Moreover, the $q$-numbers are given by
\begin{equation}
\lbrack\lbrack a]]_{q}=\frac{1-q^{a}}{1-q},
\end{equation}
and the $q$-factorials are defined in complete analogy to the undeformed case:
\begin{equation}
\lbrack\lbrack\hspace{0.01in}n]]_{q}!=[[1]]_{q}\hspace{0.01in}[[2]]_{q}\ldots\lbrack\lbrack\hspace{0.01in}n-1]]_{q}\hspace{0.01in}[[\hspace{0.01in}n]]_{q},\qquad\lbrack\lbrack0]]_{q}!=1.
\end{equation}
As a simple consistency check, the formula in Eq.~(\ref{StaProForExp}) reproduces the defining relations of the quantum space; for example, $x^{3}\circledast x^{+}=q^{2}\hspace{0.01in}x^{+}x^{3}$ and $x^{-}\circledast x^{+}=x^{+}x^{-}+\lambda\hspace{0.01in}(x^{3})^{2}$, in accordance with Eq.~(\ref{RelQuaEukDre}).

The algebra isomorphism $\mathcal{W}^{-1}$ also enables us to carry over the conjugation for the quantum space algebra $\mathbb{R}_{q}^{3,t}$ to the commutative coordinate algebra $\mathbb{C}[\hspace{0.01in}x^{+},x^{3},x^{-},t\hspace{0.01in}]$. In other words, the mapping $\mathcal{W}^{\hspace{0.01in}-1}$ is a $\ast$-al\-ge\-bra homomorphism:
\begin{equation}
\mathcal{W}(\hspace{0.01in}\overline{f}\hspace{0.01in})=\overline{\mathcal{W}(f)}\qquad\Leftrightarrow\text{\qquad}\overline{f}=\mathcal{W}^{-1}\big (\hspace{0.01in}\overline{\mathcal{W}(f)}\hspace{0.01in}\big ).
\label{ConAlgIso}
\end{equation}
This relationship implies the following property for the star-pro\-duct:
\begin{equation}
\overline{f\circledast g}=\overline{g}\circledast\overline{f}.
\label{KonEigSteProFkt}
\end{equation}
With $\bar{f}$, we designate the power series obtained from $f$ by quantum space conjugation. If $\bar{a}_{n_{+},n_{3},n_{-},n_{0}}$ stands for the complex conjugate of $a_{n_{+},n_{3},n_{-},n_{0}}$, Eqs.~(\ref{ConSpaKoo}) and (\ref{ConAlgIso}) yield that $\bar{f}$ takes the following form \cite{Wachter:2007A,Wachter:2020A}:
\begin{align}
\overline{f(\mathbf{x},t)} & =\sum\nolimits_{\underline{n}}\bar{a}_{\hspace{0.01in}n_{+},n_{3},n_{-},n_{0}}\,(-\hspace{0.01in}q\hspace{0.01in}x^{-})^{n_{+}}(\hspace{0.01in}x^{3})^{n_{3}}(-\hspace{0.01in}q^{-1}x^{+})^{n_{-}}t^{n_{0}}\nonumber\\
& =\sum\nolimits_{\underline{n}}(-\hspace{0.01in}q)^{n_{-}-\hspace{0.01in}n_{+}}\hspace{0.01in}\bar{a}_{\hspace{0.01in}n_{-},n_{3},n_{+},n_{0}}\,(\hspace{0.01in}x^{+})^{n_{+}}(\hspace{0.01in}x^{3})^{n_{3}}(\hspace{0.01in}x^{-})^{n_{-}}t^{n_{0}}\nonumber\\
& =\bar{f}(\mathbf{x},t).
\label{KonPotReiKom}
\end{align}

\subsection{Partial derivatives and integrals\label{KapParDer}}

There are partial derivatives for $q$-de\-formed space-time coordinates \cite{CarowWatamura:1990zp,Wess:1990vh}. These partial derivatives again form a quantum space with the same algebraic structure as that of the $q$-de\-formed space-time coordinates.
Thus, the $q$-de\-formed partial derivatives $\partial_{i}$ satisfy the same commutation relations as the covariant coordinate generators $X_{i}$:
\begin{gather}
\partial_{0}\hspace{0.01in}\partial_{+}=\hspace{0.01in}\partial_{+}\hspace{0.01in}\partial_{0},\quad\partial_{0}\hspace{0.01in}\partial_{-}=\hspace{0.01in}\partial_{-}\hspace{0.01in}\partial_{0},\quad\partial_{0}\hspace{0.01in}\partial_{3}=\partial_{3}\hspace{0.01in}\partial_{0},\nonumber\\
\partial_{+}\hspace{0.01in}\partial_{3}=q^{2}\partial_{3}\hspace{0.01in}\partial_{+},\quad\partial_{3}\hspace{0.01in}\partial_{-}=\hspace{0.01in}q^{2}\partial_{-}\hspace{0.01in}\partial_{3},\nonumber\\
\partial_{+}\hspace{0.01in}\partial_{-}-\partial_{-}\hspace{0.01in}\partial_{+}=\hspace{0.01in}\lambda\hspace{0.01in}\partial_{3}\hspace{0.01in}\partial_{3}.
\end{gather}
The commutation relations above are invariant under conjugation if the derivatives show the following conjugation properties:\footnote{The indices of partial derivatives are raised and lowered in the same way as those of coordinates [see Eq.~(\ref{HebSenInd}) in Chap.~\ref{KapQuaZeiEle}].}
\begin{equation}
\overline{\partial_{A}}=-\hspace{0.01in}\partial^{A}=-g^{AB}\partial_{B},\qquad\overline{\partial_{0}}=-\hspace{0.01in}\partial^{0}=-\hspace{0.01in}\partial_{0}.
\label{KonAbl}
\end{equation}
There are two ways of commuting $q$-de\-formed partial derivatives with $q$-de\-formed space-time coordinates. One is given by the following $q$-de\-formed Leibniz rules \cite{CarowWatamura:1990zp,Wess:1990vh,Wachter:2020A}:
\begin{align}
\partial_{B}X^{A} & =\delta_{B}^{A}+q^{4}\hat{R}{^{AC}}_{BD}\,X^{D}\partial_{C},\nonumber\\
\partial_{A}X^{0} & =X^{0}\hspace{0.01in}\partial_{A},\nonumber\\
\partial_{0}\hspace{0.01in}X^{A} & =X^{A}\hspace{0.01in}\partial_{0},\nonumber\\
\partial_{0}\hspace{0.01in}X^{0} & =1+X^{0}\hspace{0.01in}\partial_{0}.
\label{DifKalExtEukQuaDreUnk}
\end{align}
Note that $\hat{R}{^{AC}}_{BD}$ denotes the vector representation of the R-matrix for the three-di\-men\-sion\-al $q$-de\-formed Euclidean space. By conjugation, we can obtain the Leibniz rules for another differential calculus from the identities in Eq.~(\ref{DifKalExtEukQuaDreUnk}). Introducing $\hat{\partial}_{A}=q^{6}\partial_{A}$ and $\hat{\partial}_{0}=\partial_{0}$, we can write the Leibniz rules of this second differential calculus in the following form:
\begin{align}
\hat{\partial}_{B}\hspace{0.01in}X^{A} & =\delta_{B}^{A}+q^{-4}(\hat{R}^{-1}){^{AC}}_{BD}\,X^{D}\hat{\partial}_{C},\nonumber\\
\hat{\partial}_{A}\hspace{0.01in}X^{0} & =X^{0}\hspace{0.01in}\hat{\partial}_{A},\nonumber\\
\hat{\partial}_{0}\hspace{0.01in}X^{A} & =X^{A}\hspace{0.01in}\hat{\partial}_{0},\nonumber\\
\hat{\partial}_{0}\hspace{0.01in}X^{0} & =1+X^{0}\hspace{0.01in}\hat{\partial}_{0}.
\label{DifKalExtEukQuaDreKon}
\end{align}
Using the Leibniz rules in Eq.~(\ref{DifKalExtEukQuaDreUnk}) or Eq.~(\ref{DifKalExtEukQuaDreKon}), we can calculate how partial derivatives act on nor\-mal-or\-dered monomials of noncommutative coordinates. We can carry over these actions to commutative coordinate monomials with the help of the Mo\-yal-Weyl mapping:
\begin{equation}
\partial^{i}\triangleright(x^{+})^{n_{+}}(x^{3})^{n_{3}}(x^{-})^{n_{-}}t^{\hspace{0.01in}n_{0}}=\mathcal{W}^{\hspace{0.01in}-1}\big (\partial^{i}\triangleright(X^{+})^{n_{+}}(X^{3})^{n_{3}}(X^{-})^{n_{-}}(X^{0})^{n_{0}}\big ).
\end{equation}
Since the Mo\-yal-Weyl mapping is linear, we can apply the action above to space-time functions that can be written as a power series:
\begin{equation}
\partial^{i}\triangleright f(\mathbf{x},t)=\mathcal{W}^{\hspace{0.01in}-1}\big (\partial^{i}\triangleright\mathcal{W}(f(\mathbf{x},t))\big ).
\end{equation}
If we use the ordering given in Eq.~(\ref{StePro0}) of the previous chapter, the Leibniz rules in Eq.~(\ref{DifKalExtEukQuaDreUnk}) lead to the following operator representations \cite{Bauer:2003}:
\begin{align}
\partial_{+}\triangleright f(\mathbf{x},t) & =D_{q^{4},\hspace{0.01in}x^{+}}f(\mathbf{x},t),\nonumber\\
\partial_{3}\triangleright f(\mathbf{x},t) & =D_{q^{2},\hspace{0.01in}x^{3}}f(q^{2}x^{+},x^{3},x^{-},t),\nonumber\\
\partial_{-}\triangleright f(\mathbf{x},t) & =D_{q^{4},\hspace{0.01in}x^{-}}f(x^{+},q^{2}x^{3},x^{-},t)+\lambda\hspace{0.01in}x^{+}D_{q^{2},\hspace{0.01in}x^{3}}^{2}f(\mathbf{x},t).
\label{UnkOpeDarAbl}
\end{align}
The derivative $\partial_{0}$, however, is represented on the commutative space-time algebra by an ordinary partial derivative:
\begin{equation}
\partial_{0}\triangleright\hspace{-0.01in}f(\mathbf{x},t)=\frac{\partial f(\mathbf{x},t)}{\partial t}.
\label{OpeDarZeiAblExtQuaEuk}
\end{equation}
Using the Leibniz rules in Eq.~(\ref{DifKalExtEukQuaDreKon}), we get operator representations for the partial derivatives $\hat{\partial}_{i}$. The Leibniz rules in Eq.~(\ref{DifKalExtEukQuaDreUnk}) and Eq.~(\ref{DifKalExtEukQuaDreKon}) are transformed into each other by the following substitutions:
\begin{gather}
q\rightarrow q^{-1},\quad X^{-}\rightarrow X^{+},\quad X^{+}\rightarrow X^{-},\nonumber\\
\partial^{\hspace{0.01in}+}\rightarrow\hat{\partial}^{\hspace{0.01in}-},\quad\partial^{\hspace{0.01in}-}\rightarrow\hat{\partial}^{\hspace{0.01in}+},\quad\partial^{\hspace{0.01in}3}\rightarrow\hat{\partial}^{\hspace{0.01in}3},\quad\partial^{\hspace{0.01in}0}\rightarrow\hat{\partial}^{\hspace{0.01in}0}.
\label{UebRegGedUngAblDreQua}
\end{gather}
For this reason, we obtain the operator representations of the partial derivatives $\hat{\partial}_{A}$ from those of the partial derivatives $\partial_{A}$ [cf. Eq.~(\ref{UnkOpeDarAbl})] if we replace $q$ by $q^{-1}$ and exchange the indices $+$ and $-$:
\begin{align}
\hat{\partial}_{-}\,\bar{\triangleright}\,f(\mathbf{x},t) & =D_{q^{-4},\hspace{0.01in}x^{-}}f(\mathbf{x},t),\nonumber\\
\hat{\partial}_{3}\,\bar{\triangleright}\,f(\mathbf{x},t) & =D_{q^{-2},\hspace{0.01in}x^{3}}f(q^{-2}x^{-},x^{3},x^{+},t),\nonumber\\
\hat{\partial}_{+}\,\bar{\triangleright}\,f(\mathbf{x},t) & =D_{q^{-4},\hspace{0.01in}x^{+}}f(x^{-},q^{-2}x^{3},x^{+},t)-\lambda\hspace{0.01in}x^{-}D_{q^{-2},\hspace{0.01in}x^{3}}^{2}f(\mathbf{x},t).
\label{KonOpeDarAbl}
\end{align}
Once again, $\hat{\partial}_{0}$ is represented on the commutative space-time algebra by an ordinary partial derivative:
\begin{equation}
\hat{\partial}_{0}\,\bar{\triangleright}\,f(\mathbf{x},t)=\frac{\partial f(\mathbf{x},t)}{\partial t}.
\label{OpeDarZeiAblExtQuaEukKon}
\end{equation}
Due to the substitutions given in Eq.~(\ref{UebRegGedUngAblDreQua}), the actions in Eqs.~(\ref{KonOpeDarAbl}) and (\ref{OpeDarZeiAblExtQuaEukKon}) refer to nor\-mal-or\-dered monomials different from those in Eq.~(\ref{StePro0}) of the previous chapter:
\begin{equation}
\widetilde{\mathcal{W}}\left( t^{\hspace{0.01in}n_{0}}(x^{+})^{n_{+}}(x^{3})^{n_{3}}(x^{-})^{n_{-}}\right) =(X^{0})^{n_{0}}(X^{-})^{n_{-}}(X^{3})^{n_{3}}(X^{+})^{n_{+}}.
\label{UmNor}
\end{equation}
We should not forget that we can also commute $q$-de\-formed partial derivatives from the \textit{right} side of a nor\-mal-or\-dered monomial to the left side by using the Leibniz rules. This way, we get the so-called \textit{right}-re\-pre\-sen\-ta\-tions of partial derivatives, for which we write $f\,\bar{\triangleleft}\,\partial^{i}$ or $f\triangleleft\hat{\partial}^{i}$. Note that the operation of conjugation transforms left actions of partial derivatives into right actions and vice versa \cite{Bauer:2003}:
\begin{align}
\overline{\partial^{i}\triangleright f} & =-\bar{f}\,\bar{\triangleleft}\,\partial_{i}, & \overline{f\,\bar{\triangleleft}\,\partial^{i}} & =-\hspace{0.01in}\partial_{i}\triangleright\bar{f},\nonumber\\
\overline{\hat{\partial}^{i}\,\bar{\triangleright}\,f} & =-\bar{f}\triangleleft\hat{\partial}_{i}, & \overline{f\triangleleft\hat{\partial}^{i}} & =-\hspace{0.01in}\hat{\partial}_{i}\,\bar{\triangleright}\,\bar{f}.
\label{RegConAbl}
\end{align}
In general, the operator representations in Eqs.~(\ref{UnkOpeDarAbl}) and (\ref{KonOpeDarAbl}) consist of two terms, which we call $\partial_{\operatorname*{cla}}^{A}$ and $\partial_{\operatorname*{cor}}^{A}$:
\begin{equation}
\partial^{A}\triangleright F=\left( \partial_{\operatorname*{cla}}^{A}+\partial_{\operatorname*{cor}}^{A}\right) \triangleright F.
\end{equation}
In the undeformed limit $q\rightarrow1$, $\partial_{\operatorname*{cla}}^{A}$ becomes an ordinary partial derivative, and $\partial_{\operatorname*{cor}}^{A}$ disappears. We get a solution to the difference equation $\partial^{A}\triangleright F=f$ with given $f$ by using the following formula \cite{Wachter:2004A}:
\begin{align}
F & =(\partial^{A})^{-1}\triangleright f=\left( \partial_{\operatorname*{cla}}^{A}+\partial_{\operatorname*{cor}}^{A}\right)^{-1}\triangleright f\nonumber\\
& =\sum_{k\hspace{0.01in}=\hspace{0.01in}0}^{\infty}\left[ -(\partial_{\operatorname*{cla}}^{A})^{-1}\partial_{\operatorname*{cor}}^{A}\right]^{k}(\partial_{\operatorname*{cla}}^{A})^{-1}\triangleright f.
\end{align}
Applying the above formula to the operator representations in Eq.~(\ref{UnkOpeDarAbl}), we get
\begin{align}
(\partial_{+})^{-1}\triangleright f(\mathbf{x},t) & =D_{q^{4},\hspace{0.01in}x^{+}}^{-1}f(\mathbf{x},t),\nonumber\\
(\partial_{3})^{-1}\triangleright f(\mathbf{x},t) & =D_{q^{2},\hspace{0.01in}x^{3}}^{-1}f(q^{-2}x^{+},x^{3},x^{-},t),
\label{InvParAbl1}
\end{align}
and
\begin{gather}
(\partial_{-})^{-1}\triangleright f(\mathbf{x},t)=\nonumber\\
=\sum_{k\hspace{0.01in}=\hspace{0.01in}0}^{\infty}q^{2k\left( k\hspace{0.01in}+1\right)}\left( -\lambda\,x^{+}D_{q^{4},\hspace{0.01in}x^{-}}^{-1}D_{q^{2},\hspace{0.01in}x^{3}}^{2}\right)^{k}D_{q^{4},\hspace{0.01in}x^{-}}^{-1}f(x^{+},q^{-2\left( k\hspace{0.01in}+1\right)}x^{3},x^{-},t).
\label{InvParAbl2}
\end{gather}
Note that $D_{q,\hspace{0.01in}x}^{-1}$ stands for a Jackson integral with $x$ being the variable of integration \cite{Jackson:1908}. The explicit form of this Jackson integral depends on its limits of integration and the value of the deformation parameter $q$. If $x>0$ and $q>1$, for example, the following applies:
\begin{equation}
\int_{0}^{\hspace{0.01in}x}\text{d}_{q}z\hspace{0.01in}f(z)=(q-1)\hspace{0.01in}x\sum_{j=1}^{\infty}q^{-j}f(q^{-j}x).
\end{equation}
Finally, the integral for the time coordinate is an ordinary integral since $\partial_{0}$ acts on the commutative space-time algebra like an ordinary partial derivative [cf.
Eq.~(\ref{OpeDarZeiAblExtQuaEuk})]:
\begin{equation}
(\partial_{0})^{-1}\triangleright f(\mathbf{x},t)\hspace{0.01in}=\int\text{d}t\,f(\mathbf{x},t).
\end{equation}
The above considerations also apply to the partial derivatives with a hat. In fact, we can obtain the representations of $\hat{\partial}_{i}$ from those of the derivatives $\partial_{i}$ if we replace $q$ with $q^{-1}$ and exchange the indices $+$ and $-$. Applying these substitutions to the expressions in Eqs.~(\ref{InvParAbl1}) and (\ref{InvParAbl2}), we immediately get the corresponding results for the partial derivatives $\hat{\partial}_{i}$.

By successively applying the integral operators given in Eqs.~(\ref{InvParAbl1}) and (\ref{InvParAbl2}), we can define an integration over all space \cite{Wachter:2004A,Wachter:2007A}:
\begin{equation}
\int_{-\infty}^{+\infty}\text{d}_{q}^{3}x\,f(x^{+},x^{3},x^{-})=(\partial_{-})^{-1}\big |_{-\infty}^{+\infty}\,(\partial_{3})^{-1}\big |_{-\infty}^{+\infty}\,(\partial_{+})^{-1}\big |_{-\infty}^{+\infty}\triangleright f.
\end{equation}
On the right-hand side of the above relation, the different integral operators can be simplified to Jackson integrals \cite{Wachter:2004A,Jambor:2004ph}:
\begin{equation}
\int_{-\infty}^{+\infty}\text{d}_{q}^{3}x\,f(\mathbf{x})=D_{q^{2},\hspace{0.01in}x^{-}}^{-1}\big |_{-\infty}^{+\infty}\,D_{q,x^{3}}^{-1}\big |_{-\infty}^{+\infty}\,D_{q^{2},\hspace{0.01in}x^{+}}^{-1}\big |_{-\infty}^{+\infty}\,f(\mathbf{x}).
\end{equation}
Note that the Jackson integrals in the formula above refer to a smaller $q$-lattice. Using such a smaller $q$-lattice ensures that our integral over all space is a scalar with trivial braiding properties \cite{Kempf:1994yd}.

The $q$-integral over all space shows some significant features \cite{Wachter:2007A,Jambor:2004ph}. In this respect, $q$-de\-formed versions of \textit{Stokes' theorem} apply:
\begin{align}
\int_{-\infty}^{+\infty}\text{d}_{q}^{3}x\,\partial^{A}\triangleright f & =\int_{-\infty}^{+\infty}\text{d}_{q}^{3}x\,f\,\bar{\triangleleft}\,\partial^{A}=0,\nonumber\\
\int_{-\infty}^{+\infty}\text{d}_{q}^{3}x\,\hat{\partial}^{A}\,\bar{\triangleright}\,f & =\int_{-\infty}^{+\infty}\text{d}_{q}^{3}x\,f\triangleleft\hat{\partial}^{A}=0.
\end{align}
The $q$-de\-formed Stokes' theorem also implies rules for integration by parts:
\begin{align}
\int_{-\infty}^{+\infty}\text{d}_{q}^{3}x\,f\circledast(\partial^{A}\triangleright g) & =\int_{-\infty}^{+\infty}\text{d}_{q}^{3}x\,(f\triangleleft\partial^{A})\circledast g,\nonumber\\
\int_{-\infty}^{+\infty}\text{d}_{q}^{3}x\,f\circledast(\hat{\partial}^{A}\,\bar{\triangleright}\,g) & =\int_{-\infty}^{+\infty}\text{d}_{q}^{3}x\,(f\,\bar{\triangleleft}\,\hat{\partial}^{A})\circledast g.
\label{PatIntUneRaumInt}
\end{align}
Finally, we mention that the $q$-integral over all space behaves as follows under quantum space conjugation:
\begin{equation}
\overline{\int_{-\infty}^{+\infty}\text{d}_{q}^{3}x\,f}=\int_{-\infty}^{+\infty}\text{d}_{q}^{3}x\,\bar{f}.
\label{KonEigVolInt}
\end{equation}

\subsection{Exponentials and Translations\label{KapExp}}

A $q$-de\-formed exponential is an eigenfunction of each partial derivative of a given $q$-de\-formed quantum space \cite{Majid:1993ud,Schirrmacher:1995,Wachter:2004ExpA}.
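In the simplest, one-dimensional case this can be checked by hand: since $D_{q,x}\hspace{0.01in}x^{n}=[[\hspace{0.01in}n]]_{q}\hspace{0.01in}x^{n-1}$, the series
$$
e_{q}(x)=\sum_{n\hspace{0.01in}=\hspace{0.01in}0}^{\infty}\frac{x^{n}}{[[\hspace{0.01in}n]]_{q}!}
$$
satisfies $D_{q,x}\hspace{0.01in}e_{q}(x)=e_{q}(x)$ and reduces to the ordinary exponential function in the undeformed limit $q\rightarrow1$.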
In the following, we consider $q$-de\-formed exponentials that are eigenfunctions for left actions or right actions of partial derivatives:
\begin{align}
\text{i}^{-1}\partial^{A}\triangleright\exp_{q}(\mathbf{x}|\text{i}\mathbf{p}) & =\exp_{q}(\mathbf{x}|\text{i}\mathbf{p})\circledast p^{A},\nonumber\\
\exp_{q}(\text{i}^{-1}\mathbf{p}|\hspace{0.01in}\mathbf{x})\,\bar{\triangleleft}\,\partial^{A}\text{i}^{-1} & =p^{A}\circledast\exp_{q}(\text{i}^{-1}\mathbf{p}|\hspace{0.01in}\mathbf{x}).
\label{EigGl1N}
\end{align}
The above eigenvalue equations are shown graphically in Fig.~\ref{Fig1}. The $q$-ex\-po\-nen\-tials are uniquely defined by their eigenvalue equations and the following normalization conditions:
\begin{align}
\exp_{q}(\mathbf{x}|\text{i}\mathbf{p})|_{x\hspace{0.01in}=\hspace{0.01in}0} & =\exp_{q}(\mathbf{x}|\text{i}\mathbf{p})|_{p\hspace{0.01in}=\hspace{0.01in}0}=1,\nonumber\\
\exp_{q}(\text{i}^{-1}\mathbf{p}|\hspace{0.01in}\mathbf{x})|_{x\hspace{0.01in}=\hspace{0.01in}0} & =\exp_{q}(\text{i}^{-1}\mathbf{p}|\hspace{0.01in}\mathbf{x})|_{p\hspace{0.01in}=\hspace{0.01in}0}=1.
\label{NorBedExp}
\end{align}
\begin{figure}[ptb]
\begin{center}
\centerline{\psfig{figure=Fig1.eps,width=4.555in}}
\caption{Eigenvalue equations of $q$-exponentials.}
\label{Fig1}
\end{center}
\end{figure}
Using the operator representations in Eq.~(\ref{UnkOpeDarAbl}) of the last chapter, we found the following expressions for the $q$-ex\-ponen\-tials of the three-di\-men\-sion\-al Euclidean quantum space \cite{Wachter:2004ExpA}:
\begin{align}
\exp_{q}(\mathbf{x}|\text{i}\mathbf{p}) & =\sum_{\underline{n}\,=\,0}^{\infty}\frac{(x^{+})^{n_{+}}(x^{3})^{n_{3}}(x^{-})^{n_{-}}(\text{i}p_{-})^{n_{-}}(\text{i}p_{3})^{n_{3}}(\text{i}p_{+})^{n_{+}}}{[[\hspace{0.01in}n_{+}]]_{q^{4}}!\,[[\hspace{0.01in}n_{3}]]_{q^{2}}!\,[[\hspace{0.01in}n_{-}]]_{q^{4}}!},\nonumber\\
\exp_{q}(\text{i}^{-1}\mathbf{p}|\mathbf{x}) & =\sum_{\underline{n}\,=\,0}^{\infty}\frac{(\text{i}^{-1}p^{+})^{n_{+}}(\text{i}^{-1}p^{3})^{n_{3}}(\text{i}^{-1}p^{-})^{n_{-}}(x_{-})^{n_{-}}(x_{3})^{n_{3}}(x_{+})^{n_{+}}}{[[\hspace{0.01in}n_{+}]]_{q^{4}}!\,[[\hspace{0.01in}n_{3}]]_{q^{2}}!\,[[\hspace{0.01in}n_{-}]]_{q^{4}}!}.
\label{ExpEukExp}
\end{align}
If we substitute $q$ with $q^{-1}$ in both expressions of Eq.~(\ref{ExpEukExp}), we get two more $q$-exponentials, which we designate $\overline{\exp}_{q}(\mathbf{x}|\text{i}\mathbf{p})$ and $\overline{\exp}_{q}(\text{i}^{-1}\mathbf{p}|\hspace{0.01in}\mathbf{x})$. We obtain the eigenvalue equations and normalization conditions of these two $q$-exponentials by applying the following substitutions to Eqs.~(\ref{EigGl1N}) and (\ref{NorBedExp}):
\begin{equation}
\exp_{q}\rightarrow\hspace{0.01in}\overline{\exp}_{q},\qquad\triangleright\,\rightarrow\,\bar{\triangleright},\qquad\bar{\triangleleft}\,\rightarrow\,\triangleleft,\qquad\partial^{A}\rightarrow\hat{\partial}^{A}.
\label{ErsRegQExp}
\end{equation}
We can use $q$-exponentials to generate $q$-translations \cite{Chryssomalakos:1993zm}.
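Before doing so, we note a simple consistency check: in the undeformed limit $q\rightarrow1$ all $q$-factorials become ordinary factorials, so both series in Eq.~(\ref{ExpEukExp}) reduce to ordinary plane waves, e.g.
$$
\lim_{q\rightarrow1}\exp_{q}(\mathbf{x}|\text{i}\mathbf{p})=\exp\big(\text{i}\hspace{0.01in}(x^{+}p_{+}+x^{3}p_{3}+x^{-}p_{-})\big).
$$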
If we replace the momentum coordinates in the expressions for the $q$-exponentials with derivatives, the following applies \cite{Carnovale:1999,Majid:1993ud,Wachter:2007A}:
\begin{align}
\exp_{q}(x|\partial_{y})\triangleright g(\hspace{0.01in}y) & =g(x\,\bar{\oplus}\,y),\nonumber\\
\overline{\exp}_{q}(x|\hat{\partial}_{y})\,\bar{\triangleright}\,g(\hspace{0.01in}y) & =g(x\oplus y),
\label{q-TayN}
\end{align}
and
\begin{align}
g(\hspace{0.01in}y)\,\bar{\triangleleft}\,\exp_{q}(-\hspace{0.01in}\partial_{y}|\hspace{0.01in}x) & =g(\hspace{0.01in}y\,\bar{\oplus}\,x),\nonumber\\
g(\hspace{0.01in}y)\triangleleft\hspace{0.01in}\overline{\exp}_{q}(-\hspace{0.01in}\hat{\partial}_{y}|\hspace{0.01in}x) & =g(\hspace{0.01in}y\oplus x).
\label{q-TayRecN}
\end{align}
In the case of the three-di\-men\-sion\-al $q$-de\-formed Euclidean space, for example, we can get the following formula for calculating $q$-trans\-la\-tions \cite{Wachter:2004phengl}:
\begin{align}
f(\mathbf{x}\oplus\mathbf{y})= & \sum_{i_{+}=\hspace{0.01in}0}^{\infty}\sum_{i_{3}=\hspace{0.01in}0}^{\infty}\sum_{i_{-}=\hspace{0.01in}0}^{\infty}\sum_{k\hspace{0.01in}=\hspace{0.01in}0}^{i_{3}}\frac{(-q^{-1}\lambda\lambda_{+})^{k}}{[[2k]]_{q^{-2}}!!}\frac{(x^{-})^{i_{-}}(x^{3})^{i_{3}-\hspace{0.01in}k}(x^{+})^{i_{+}+\hspace{0.01in}k}\,(\hspace{0.01in}y^{-})^{k}}{[[i_{-}]]_{q^{-4}}!\,[[i_{3}-k]]_{q^{-2}}!\,[[i_{+}]]_{q^{-4}}!}\nonumber\\
& \qquad\times\big (D_{q^{-4},\hspace{0.01in}y^{-}}^{i_{-}}D_{q^{-2},\hspace{0.01in}y^{3}}^{i_{3}+\hspace{0.01in}k}\hspace{0.01in}D_{q^{-4},\hspace{0.01in}y^{+}}^{i_{+}}f\big )(q^{2(k\hspace{0.01in}-\hspace{0.01in}i_{3})}y^{-},q^{-2i_{+}}y^{3}).
\end{align}
In analogy to the undeformed case, $q$-ex\-ponen\-tials satisfy addition theorems \cite{Majid:1993ud,Schirrmacher:1995,Wachter:2007A}. Concretely, we have
\begin{align}
\exp_{q}(\mathbf{x}\,\bar{\oplus}\,\mathbf{y}|\text{i}\mathbf{p}) & =\exp_{q}(\mathbf{x}|\exp_{q}(\hspace{0.01in}\mathbf{y}|\text{i}\mathbf{p})\circledast\text{i}\mathbf{p}),\nonumber\\
\exp_{q}(\text{i}\mathbf{x}|\mathbf{p}\,\bar{\oplus}\,\mathbf{p}^{\prime}) & =\exp_{q}(\mathbf{x}\circledast\exp_{q}(\mathbf{x}|\hspace{0.01in}\text{i}\mathbf{p})|\hspace{0.01in}\text{i}\mathbf{p}^{\prime}),
\label{AddTheExp}
\end{align}
and
\begin{align}
\overline{\exp}_{q}(\mathbf{x}\oplus\mathbf{y}|\text{i}\mathbf{p}) & =\overline{\exp}_{q}(\mathbf{x}|\overline{\exp}_{q}(\hspace{0.01in}\mathbf{y}|\text{i}\mathbf{p})\circledast\text{i}\mathbf{p}),\nonumber\\
\overline{\exp}_{q}(\text{i}\mathbf{x}|\mathbf{p}\oplus\mathbf{p}^{\prime}) & =\overline{\exp}_{q}(\mathbf{x}\circledast\overline{\exp}_{q}(\mathbf{x}|\text{i}\mathbf{p})|\hspace{0.01in}\text{i}\mathbf{p}^{\prime}).
\end{align}
We can obtain further addition theorems from the above identities by substituting position coordinates with momentum coordinates and vice versa. For a better understanding of the meaning of the two addition theorems in Eq.~(\ref{AddTheExp}), we have given their graphic representation in Fig.~\ref{Fig2}.
\begin{figure}[ptb]
\begin{center}
\centerline{\psfig{figure=Fig2.eps,width=1.8827in}}
\caption{Addition theorems for $q$-exponentials.}
\label{Fig2}
\end{center}
\end{figure}
The $q$-de\-formed quantum spaces considered so far are so-called braided Hopf algebras \cite{Majid:1996kd}.
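On the coordinate generators, both $q$-translations act additively: choosing $f=x^{A}$ in Eq.~(\ref{q-TayN}) yields
$$
f(\mathbf{x}\oplus\mathbf{y})=x^{A}+y^{A},\qquad f(\mathbf{x}\,\bar{\oplus}\,\mathbf{y})=x^{A}+y^{A},
$$
as one can also read off from the explicit formula above; the deformation becomes visible only in the $q$-de\-pend\-ent coefficients of higher monomials.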
The $q$-deformed quantum spaces considered so far are so-called braided Hopf algebras \cite{Majid:1996kd}. From this point of view, the two versions of $q$-translations are nothing else but realizations of two braided co-products $\underline{\Delta}$ and $\underline{\bar{\Delta}}$ on the corresponding commutative coordinate algebras \cite{Wachter:2007A}
\begin{align}
f(\mathbf{x}\oplus\mathbf{y}) & =((\mathcal{W}^{-1}\otimes\mathcal{W}^{-1})\circ\underline{\Delta})(\mathcal{W}(f)),\nonumber\\[0.08in]
f(\mathbf{x}\,\bar{\oplus}\,\mathbf{y}) & =((\mathcal{W}^{-1}\otimes\mathcal{W}^{-1})\circ\underline{\bar{\Delta}})(\mathcal{W}(f)).
\label{KonReaBraCop}
\end{align}
The braided Hopf algebras have braided antipodes $\underline{S}$ and $\underline{\bar{S}}$ as well. We can realize these antipodes on the corresponding commutative algebras, too:
\begin{align}
f(\ominus\,\mathbf{x}) & =(\mathcal{W}^{-1}\circ\underline{S}\,)(\mathcal{W}(f)),\nonumber\\
f(\bar{\ominus}\,\mathbf{x}) & =(\mathcal{W}^{-1}\circ\underline{\bar{S}}\,)(\mathcal{W}(f)).
\label{qInvDef}
\end{align}
In the following, we refer to the operations in Eq.~(\ref{qInvDef}) as $q$\textit{-inversions}. In the case of the $q$-deformed Euclidean space, for example, we have found the following operator representation for $q$-inversions \cite{Wachter:2004phengl}
\begin{align}
\hat{U}^{-1}f(\ominus\,\mathbf{x})= & \sum_{i\,=\,0}^{\infty}(-q\lambda\lambda_{+})^{i}\,\frac{(x^{+}x^{-})^{i}}{[[2i]]_{q^{-2}}!!}\,q^{-2\hat{n}_{+}(\hat{n}_{+}+\hat{n}_{3})-2\hat{n}_{-}(\hat{n}_{-}+\hat{n}_{3})-\hat{n}_{3}\hat{n}_{3}}\nonumber\\
& \qquad\times D_{q^{-2},\,x^{3}}^{2i}\,f(-q^{2-4i}x^{-},-q^{1-2i}x^{3},-q^{2-4i}x^{+}).
\end{align}
The operators $\hat{U}$ and $\hat{U}^{-1}$ act on a commutative function $f(x^{+},x^{3},x^{-})$ as follows
\begin{align}
\hat{U}f & =\sum_{k\,=\,0}^{\infty}\left(-\lambda\right)^{k}\frac{(x^{3})^{2k}}{[[k]]_{q^{-4}}!}\,q^{-2\hat{n}_{3}(\hat{n}_{+}+\hat{n}_{-}+k)}D_{q^{-4},\,x^{+}}^{k}D_{q^{-4},\,x^{-}}^{k}f,\nonumber\\
\hat{U}^{-1}f & =\sum_{k\,=\,0}^{\infty}\lambda^{k}\,\frac{(x^{3})^{2k}}{[[k]]_{q^{4}}!}\,q^{2\hat{n}_{3}(\hat{n}_{+}+\hat{n}_{-}+k)}D_{q^{4},\,x^{+}}^{k}D_{q^{4},\,x^{-}}^{k}f.
\end{align}
The braided co-products and braided antipodes satisfy the axioms [also see Ref.~\cite{Majid:1996kd}]
\begin{align}
m\circ(\underline{S}\otimes\operatorname*{id})\circ\underline{\Delta} & =m\circ(\operatorname*{id}\otimes\,\underline{S}\,)\circ\underline{\Delta}=\underline{\varepsilon},\nonumber\\
m\circ(\underline{\bar{S}}\otimes\operatorname*{id})\circ\underline{\bar{\Delta}} & =m\circ(\operatorname*{id}\otimes\,\underline{\bar{S}}\,)\circ\underline{\bar{\Delta}}=\underline{\bar{\varepsilon}},
\label{HopfVerAnfN}
\end{align}
and
\begin{align}
(\operatorname*{id}\otimes\,\underline{\varepsilon})\circ\underline{\Delta} & =\operatorname*{id}=(\underline{\varepsilon}\otimes\operatorname*{id})\circ\underline{\Delta},\nonumber\\
(\operatorname*{id}\otimes\,\underline{\bar{\varepsilon}})\circ\underline{\bar{\Delta}} & =\operatorname*{id}=(\underline{\bar{\varepsilon}}\otimes\operatorname*{id})\circ\underline{\bar{\Delta}}.
\label{HopfAxi2}
\end{align}
In the identities above, we denote the multiplication of the braided Hopf algebra by $m$. The co-units $\underline{\varepsilon}$ and $\underline{\bar{\varepsilon}}$ of the two braided Hopf structures are both linear mappings that vanish on the coordinate generators
\begin{equation}
\underline{\varepsilon}(X^{i})=\underline{\bar{\varepsilon}}(X^{i})=0.
\end{equation}
For this reason, we can realize the co-units $\underline{\varepsilon}$ and $\underline{\bar{\varepsilon}}$ on a commutative coordinate algebra as follows
\begin{equation}
\underline{\varepsilon}(\mathcal{W}(f))=\underline{\bar{\varepsilon}}(\mathcal{W}(f))=\left.f(\mathbf{x})\right\vert_{x\,=\,0}=f(0).
\label{ReaVerZopNeuEleKomAlg}
\end{equation}
Next, we translate the Hopf algebra axioms in Eqs.~(\ref{HopfVerAnfN}) and (\ref{HopfAxi2}) into corresponding rules for $q$-translations and $q$-inversions \cite{Wachter:2007A}, i.e.
\begin{align}
f((\ominus\,\mathbf{x})\oplus\mathbf{x}) & =f(\mathbf{x}\oplus(\ominus\,\mathbf{x}))=f(0),\nonumber\\
f((\bar{\ominus}\,\mathbf{x})\,\bar{\oplus}\,\mathbf{x}) & =f(\mathbf{x}\,\bar{\oplus}\,(\bar{\ominus}\,\mathbf{x}))=f(0),
\label{qAddN}
\end{align}
and
\begin{align}
f(\mathbf{x}\oplus\mathbf{y})|_{y\,=\,0} & =f(\mathbf{x})=f(\mathbf{y}\oplus\mathbf{x})|_{y\,=\,0},\nonumber\\
f(\mathbf{x}\,\bar{\oplus}\,\mathbf{y})|_{y\,=\,0} & =f(\mathbf{x})=f(\mathbf{y}\,\bar{\oplus}\,\mathbf{x})|_{y\,=\,0}.
\label{qNeuEle}
\end{align}
Using $q$-inversions, we are also able to introduce inverse $q$-exponentials
\begin{equation}
\exp_{q}(\bar{\ominus}\,\mathbf{x}|\text{i}\mathbf{p})=\exp_{q}(\text{i}\mathbf{x}|\,\bar{\ominus}\,\mathbf{p}).
\label{InvExpAlgDefKom}
\end{equation}
Due to the addition theorems and the normalization conditions of our $q$-exponentials, the following applies
\begin{equation}
\exp_{q}(\text{i}\mathbf{x}\circledast\exp_{q}(\bar{\ominus}\,\mathbf{x}|\text{i}\mathbf{p})\circledast\mathbf{p})=\exp_{q}(\mathbf{x}\,\bar{\oplus}\,(\bar{\ominus}\,\mathbf{x})|\text{i}\mathbf{p})=\exp_{q}(\mathbf{x}|\text{i}\mathbf{p})|_{x\,=\,0}=1.
\end{equation}
For a better understanding of these identities, we give their graphical representation in Fig.~\ref{Fig3}. Explanations of this kind of graphical calculation can be found in Ref.~\cite{Majid:2002kd}.
\begin{figure}[ptb]
\centerline{\psfig{figure=Fig3.eps,width=2.5754in}}
\caption{Invertibility of $q$-exponentials.}
\label{Fig3}
\end{figure}
The conjugate $q$-exponentials $\overline{\exp}_{q}$ are subject to similar rules, obtained from the above identities by means of the following substitutions
\begin{equation}
\exp_{q}\rightarrow\overline{\exp}_{q},\qquad\bar{\oplus}\,\rightarrow\,\oplus,\qquad\bar{\ominus}\,\rightarrow\,\ominus.
\end{equation}
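The invertibility identities above are the $q$-analogue of a familiar fact; this remark is our addition. For $q\rightarrow1$, the $q$-inversion reduces to ordinary negation, and the invertibility statement becomes nothing but
\begin{equation*}
e^{\text{i}xp}\,e^{\text{i}x(-p)}=e^{\text{i}(p-p)x}=1,
\end{equation*}
i.e., the elementary observation that $e^{-\text{i}xp}$ is the multiplicative inverse of the plane wave $e^{\text{i}xp}$.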
Next, we describe another way of obtaining $q$-exponentials. We exchange the two tensor factors of a $q$-exponential using the inverse of the so-called universal R-matrix [also see the graphical representation in Fig.~\ref{Fig4}]
\begin{align}
\exp_{q}^{\ast}(\text{i}\mathbf{p}|\mathbf{x}) & =\tau\circ\lbrack(\mathcal{R}_{[2]}^{-1}\otimes\mathcal{R}_{[1]}^{-1})\triangleright\exp_{q}(\text{i}\mathbf{x}|\ominus\,\mathbf{p})],\nonumber\\
\exp_{q}^{\ast}(\mathbf{x}|\text{i}\mathbf{p}) & =\tau\circ\lbrack(\mathcal{R}_{[2]}^{-1}\otimes\mathcal{R}_{[1]}^{-1})\triangleright\exp_{q}(\ominus\,\mathbf{p}|\text{i}\mathbf{x})].
\label{DuaExp2}
\end{align}
In the expressions above, $\tau$ denotes the ordinary twist operator. One can show that the new $q$-exponentials satisfy the following eigenvalue equations (see Fig.~\ref{Fig4})
\begin{align}
\exp_{q}^{\ast}(\text{i}\mathbf{p}|\mathbf{x})\triangleleft\partial^{A} & =\text{i}p^{A}\circledast\exp_{q}^{\ast}(\text{i}\mathbf{p}|\mathbf{x}),\nonumber\\
\partial^{A}\,\bar{\triangleright}\,\exp_{q}^{\ast}(\mathbf{x}|\text{i}^{-1}\mathbf{p}) & =\exp_{q}^{\ast}(\mathbf{x}|\text{i}^{-1}\mathbf{p})\circledast\text{i}p^{A}.
\label{EigGleExpQueAbl}
\end{align}
Similar considerations apply to the conjugate $q$-exponentials. We only need to modify Eqs.~(\ref{DuaExp2}) and (\ref{EigGleExpQueAbl}) by performing the following substitutions
\begin{gather}
\exp_{q}^{\ast}\rightarrow\overline{\exp}_{q}^{\ast},\qquad\mathcal{R}_{[2]}^{-1}\otimes\mathcal{R}_{[1]}^{-1}\rightarrow\mathcal{R}_{[1]}\otimes\mathcal{R}_{[2]},\qquad\ominus\,\rightarrow\,\bar{\ominus},\nonumber\\
\bar{\triangleright}\,\rightarrow\,\triangleright,\qquad\triangleleft\,\rightarrow\,\bar{\triangleleft},\qquad\partial^{A}\rightarrow\hat{\partial}^{A}.
\end{gather}
The $q$-exponentials in Eq.~(\ref{DuaExp2}) are related to the conjugate $q$-exponentials. To see this, we rewrite the eigenvalue equations in Eq.~(\ref{EigGleExpQueAbl}) by using the identity $\hat{\partial}^{A}=q^{6}\partial^{A}$ as follows
\begin{align}
\exp_{q}^{\ast}(\text{i}\mathbf{p}|\mathbf{x})\triangleleft\hat{\partial}^{A} & =\text{i}q^{6}p^{A}\circledast\exp_{q}^{\ast}(\text{i}\mathbf{p}|\mathbf{x}),\nonumber\\
\hat{\partial}^{A}\,\bar{\triangleright}\,\exp_{q}^{\ast}(\mathbf{x}|\text{i}^{-1}\mathbf{p}) & =\exp_{q}^{\ast}(\mathbf{x}|\text{i}^{-1}\mathbf{p})\circledast\text{i}q^{6}p^{A}.
\end{align}
These are the eigenvalue equations for $\overline{\exp}_{q}(\text{i}^{-1}q^{6}\mathbf{p}|\mathbf{x})$ and $\overline{\exp}_{q}(\mathbf{x}|\text{i}q^{6}\mathbf{p})$, so the following identifications are valid
\begin{equation}
\exp_{q}^{\ast}(\text{i}\mathbf{p}|\mathbf{x})=\overline{\exp}_{q}(\text{i}^{-1}q^{6}\mathbf{p}|\mathbf{x}),\qquad\exp_{q}^{\ast}(\mathbf{x}|\text{i}^{-1}\mathbf{p})=\overline{\exp}_{q}(\mathbf{x}|\text{i}q^{6}\mathbf{p}).
\label{IdeSteExpEpxKon1}
\end{equation}
\begin{figure}[ptb]
\centerline{\psfig{figure=Fig4.eps,width=1.817in}}
\caption{Eigenvalue equation of twisted $q$-exponential.}
\label{Fig4}
\end{figure}
For the sake of completeness, we also write down how the $q$-exponentials of $q$-deformed Euclidean space behave under quantum space conjugation
\begin{align}
\overline{\exp_{q}(\mathbf{x}|\text{i}\mathbf{p})} & =\exp_{q}(\text{i}^{-1}\mathbf{p}|\mathbf{x}), & \overline{\overline{\exp}_{q}(\mathbf{x}|\text{i}\mathbf{p})} & =\overline{\exp}_{q}(\text{i}^{-1}\mathbf{p}|\mathbf{x}),\nonumber\\
\overline{\exp_{q}^{\ast}(\text{i}\mathbf{p}|\mathbf{x})} & =\exp_{q}^{\ast}(\mathbf{x}|\text{i}^{-1}\mathbf{p}), & \overline{\overline{\exp}_{q}^{\ast}(\text{i}\mathbf{p}|\mathbf{x})} & =\overline{\exp}_{q}^{\ast}(\mathbf{x}|\text{i}^{-1}\mathbf{p}).
\label{KonEigExpQua}
\end{align}

\section{Hamilton operator for a free particle\label{KapHerSchGle}}

Since the $q$-deformed Hamilton operator of a free nonrelativistic particle is supposed to be invariant under rotations, it must behave like a scalar with respect to the action of the Hopf algebra $\mathcal{U}_{q}(\operatorname*{su}\nolimits_{2})$. For this reason, we choose the following expression as the Hamilton operator for a free nonrelativistic particle of mass $m$
\begin{equation}
H_{0}=-(2m)^{-1}g_{AB}\,\partial^{A}\partial^{B}=-(2m)^{-1}\partial^{A}\partial_{A}.
\label{Ham2}
\end{equation}
Due to its definition, the Hamilton operator $H_{0}$ is a central element of the algebra of $q$-deformed partial derivatives
\begin{equation}
\lbrack H_{0},\partial^{A}]=0,\qquad A\in\{+,3,-\}.
\label{ComHP}
\end{equation}
The conjugation properties of the partial derivatives imply that $H_{0}$ is invariant under conjugation [cf. Eq.~(\ref{KonAbl}) of Chap.~\ref{KapParDer}]
\begin{equation}
\overline{H_{0}}=H_{0}.
\label{RelBedHamFre}
\end{equation}
We mention that $H_{0}$ results from the low-energy limit of the following energy-momentum relation
\begin{equation}
E_{\mathbf{p}}^{\;2}=c^{2}(p^{A}p_{A}+(mc)^{2}).
\end{equation}
This can be seen from the following calculation
\begin{align}
E_{\mathbf{p}} & =c\sqrt{p^{A}p_{A}+(mc)^{2}}=mc^{2}\sqrt{1+(mc)^{-2}p^{A}p_{A}}\nonumber\\
& =mc^{2}(1+2^{-1}(mc)^{-2}p^{A}p_{A}+\ldots)\nonumber\\
& =mc^{2}+(2m)^{-1}p^{A}p_{A}+\ldots
\label{LimEneImpBez}
\end{align}
The second term of the last expression in Eq.~(\ref{LimEneImpBez}) gives $H_{0}$ if we replace the momentum variable $p^{A}$ with the operator $\text{i}^{-1}\partial^{A}$; the constant rest energy $mc^{2}$ merely shifts the zero point of the energy and is therefore dropped.
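As a heuristic remark (our addition; the precise statement depends on the conventions chosen for the quantum metric $g_{AB}$), the definition in Eq.~(\ref{Ham2}) goes over into the familiar free Hamiltonian of nonrelativistic quantum mechanics in the undeformed limit: for $q\rightarrow1$ the partial derivatives commute, $g_{AB}\,\partial^{A}\partial^{B}$ becomes the ordinary Laplace operator $\Delta$, and
\begin{equation*}
H_{0}\rightarrow-\frac{\Delta}{2m}\qquad(\hbar=1).
\end{equation*}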
\section{Solutions to the free Schr\"{o}dinger equations\label{LoeSchGleKap}}

In Ref.~\cite{Wachter:2020A}, we derived Schr\"{o}dinger equations for the three-dimensional $q$-deformed Euclidean space $\mathbb{R}_{q}^{3}$. Now we want to find solutions to these Schr\"{o}dinger equations with the free Hamilton operator given by the expression in Eq.~(\ref{Ham2}) of the previous chapter
\begin{align}
\text{i}\partial_{t}\triangleright\phi_{R}(\mathbf{x},t) & =H_{0}\triangleright\phi_{R}(\mathbf{x},t),\nonumber\\
\phi_{L}^{\ast}(\mathbf{x},t)\triangleleft\partial_{t}\,\text{i} & =\phi_{L}^{\ast}(\mathbf{x},t)\triangleleft H_{0}.
\label{FreParSch1N}
\end{align}
Due to Eq.~(\ref{ComHP}) of the previous chapter, the free Hamilton operator commutes with the momentum operator $\text{i}^{-1}\partial_{A}$. So we seek solutions that are eigenfunctions of the momentum operator ($A\in\{+,3,-\}$)
\begin{align}
\text{i}^{-1}\partial_{A}\triangleright u_{\mathbf{p}}(\mathbf{x},t) & =u_{\mathbf{p}}(\mathbf{x},t)\circledast p_{A},\nonumber\\
(u^{\ast})_{\mathbf{p}}(\mathbf{x},t)\triangleleft\partial_{A}\,\text{i}^{-1} & =p_{A}\circledast(u^{\ast})_{\mathbf{p}}(\mathbf{x},t).
\label{ImpEigWelSol}
\end{align}
Due to these identities, we can write the Schr\"{o}dinger equations for the wave functions $u_{\mathbf{p}}(\mathbf{x},t)$ and $(u^{\ast})_{\mathbf{p}}(\mathbf{x},t)$ as follows:\footnote{The squared momentum is given by $\mathbf{p}^{2}=p^{A}\circledast p_{A}$.}
\begin{align}
\text{i}\partial_{t}\triangleright u_{\mathbf{p}}(\mathbf{x},t) & =H_{0}\triangleright u_{\mathbf{p}}(\mathbf{x},t)=-(2m)^{-1}\partial^{A}\partial_{A}\triangleright u_{\mathbf{p}}(\mathbf{x},t)\nonumber\\
& =u_{\mathbf{p}}(\mathbf{x},t)\circledast\mathbf{p}^{2}(2m)^{-1},
\label{FreSchGlImp0}\\[0.08in]
(u^{\ast})_{\mathbf{p}}(\mathbf{x},t)\triangleleft\partial_{t}\,\text{i} & =(u^{\ast})_{\mathbf{p}}(\mathbf{x},t)\triangleleft H_{0}=-(u^{\ast})_{\mathbf{p}}(\mathbf{x},t)\triangleleft\partial^{A}\partial_{A}(2m)^{-1}\nonumber\\
& =(2m)^{-1}\mathbf{p}^{2}\circledast(u^{\ast})_{\mathbf{p}}(\mathbf{x},t).
\label{FreSchGlImp1}
\end{align}
The equations above show that $u_{\mathbf{p}}(\mathbf{x},t)$ and $(u^{\ast})_{\mathbf{p}}(\mathbf{x},t)$ are eigenfunctions of the energy operator as well. To find expressions for the functions $u_{\mathbf{p}}(\mathbf{x},t)$ and $(u^{\ast})_{\mathbf{p}}(\mathbf{x},t)$, we consider the $q$-deformed momentum eigenfunctions introduced in Ref.~\cite{Wachter:2019A}. These momentum eigenfunctions satisfy the following eigenvalue equations
\begin{align}
\text{i}^{-1}\partial^{A}\triangleright u_{\mathbf{p}}(\mathbf{x}) & =u_{\mathbf{p}}(\mathbf{x})\circledast p^{A}, & u^{\mathbf{p}}(\mathbf{x})\,\bar{\triangleleft}\,\partial^{A}\,\text{i}^{-1} & =p^{A}\circledast u^{\mathbf{p}}(\mathbf{x}),\nonumber\\
\text{i}^{-1}\hat{\partial}^{A}\,\bar{\triangleright}\,\bar{u}_{\mathbf{p}}(\mathbf{x}) & =\bar{u}_{\mathbf{p}}(\mathbf{x})\circledast p^{A}, & \bar{u}^{\mathbf{p}}(\mathbf{x})\triangleleft\hat{\partial}^{A}\,\text{i}^{-1} & =p^{A}\circledast\bar{u}^{\mathbf{p}}(\mathbf{x}).
\label{EigGleImpOpeImpEigFkt0}
\end{align}
Since the $q$-exponentials of Chap.~\ref{KapExp} are eigenfunctions of $q$-deformed partial derivatives, the $q$-deformed momentum eigenfunctions can take on the following form
\begin{align}
u_{\mathbf{p}}(\mathbf{x}) & =\operatorname*{vol}\nolimits^{-1/2}\exp_{q}(\mathbf{x}|\text{i}\mathbf{p}), & u^{\mathbf{p}}(\mathbf{x}) & =\operatorname*{vol}\nolimits^{-1/2}\exp_{q}(\text{i}^{-1}\mathbf{p}|\mathbf{x}),\nonumber\\
\bar{u}_{\mathbf{p}}(\mathbf{x}) & =\operatorname*{vol}\nolimits^{-1/2}\overline{\exp}_{q}(\mathbf{x}|\text{i}\mathbf{p}), & \bar{u}^{\mathbf{p}}(\mathbf{x}) & =\operatorname*{vol}\nolimits^{-1/2}\overline{\exp}_{q}(\text{i}^{-1}\mathbf{p}|\mathbf{x}).
\label{ImpEigFktqDef}
\end{align}
The volume element $\operatorname*{vol}$ is defined by the expression in Eq.~(\ref{VolEleDef}) of the next chapter. We can also introduce dual momentum eigenfunctions [cf. Eq.~(\ref{DuaExp2}) of Chap.~\ref{KapExp}]
\begin{align}
(u^{\ast})_{\mathbf{p}}(\mathbf{x}) & =\operatorname*{vol}\nolimits^{-1/2}\exp_{q}^{\ast}(\text{i}\mathbf{p}|\mathbf{x}), & (u^{\ast})^{\mathbf{p}}(\mathbf{x}) & =\operatorname*{vol}\nolimits^{-1/2}\exp_{q}^{\ast}(\mathbf{x}|\text{i}^{-1}\mathbf{p}),\nonumber\\
(\bar{u}^{\ast})_{\mathbf{p}}(\mathbf{x}) & =\operatorname*{vol}\nolimits^{-1/2}\overline{\exp}_{q}^{\ast}(\text{i}\mathbf{p}|\mathbf{x}), & (\bar{u}^{\ast})^{\mathbf{p}}(\mathbf{x}) & =\operatorname*{vol}\nolimits^{-1/2}\overline{\exp}_{q}^{\ast}(\mathbf{x}|\text{i}^{-1}\mathbf{p}).
\label{DefDuaImpEigFktWdh}
\end{align}
The corresponding eigenvalue equations are given by [cf. Eq.~(\ref{EigGleExpQueAbl}) of Chap.~\ref{KapExp}]
\begin{align}
(u^{\ast})_{\mathbf{p}}(\mathbf{x})\triangleleft\partial^{A}\,\text{i}^{-1} & =p^{A}\circledast(u^{\ast})_{\mathbf{p}}(\mathbf{x}),\nonumber\\
\text{i}^{-1}\partial^{A}\,\bar{\triangleright}\,(u^{\ast})^{\mathbf{p}}(\mathbf{x}) & =(u^{\ast})^{\mathbf{p}}(\mathbf{x})\circledast p^{A},
\label{ImpEigFktqDef2}
\end{align}
or
\begin{align}
(\bar{u}^{\ast})_{\mathbf{p}}(\mathbf{x})\,\bar{\triangleleft}\,\hat{\partial}^{A}\,\text{i}^{-1} & =p^{A}\circledast(\bar{u}^{\ast})_{\mathbf{p}}(\mathbf{x}),\nonumber\\
\text{i}^{-1}\hat{\partial}^{A}\triangleright(\bar{u}^{\ast})^{\mathbf{p}}(\mathbf{x}) & =(\bar{u}^{\ast})^{\mathbf{p}}(\mathbf{x})\circledast p^{A}.
\end{align}
In what follows, we restrict our considerations to the momentum eigenfunctions $u_{\mathbf{p}}(\mathbf{x})$ and $(u^{\ast})_{\mathbf{p}}(\mathbf{x})$. We can obtain the results for the other momentum eigenfunctions by simple substitutions specified at the end of this chapter. We have shown in Ref.~\cite{Wachter:2020A} that the time evolution operator for the quantum space $\mathbb{R}_{q}^{3}$ is of the same form as in the undeformed case. For this reason, we get solutions to our $q$-deformed Schr\"{o}dinger equations by applying the operators $\exp(-\text{i}tH_{0})$ and $\exp(\text{i}H_{0}t)$ to time-independent functions $\phi_{R}(\mathbf{x},0)$ and $\phi_{L}^{\ast}(\mathbf{x},0)$
\begin{align}
\phi_{R}(\mathbf{x},t) & =\exp(-\text{i}tH_{0})\triangleright\phi_{R}(\mathbf{x},0),\nonumber\\
\phi_{L}^{\ast}(\mathbf{x},t) & =\phi_{L}^{\ast}(\mathbf{x},0)\triangleleft\exp(\text{i}H_{0}t).
\label{AnwZeitEnt}
\end{align}
In the same way, we can obtain plane wave solutions to our Schr\"{o}dinger equations from the momentum eigenfunctions $u_{\mathbf{p}}(\mathbf{x})$ and $(u^{\ast})_{\mathbf{p}}(\mathbf{x})$, i.e.
\begin{align}
u_{\mathbf{p}}(\mathbf{x},t) & =\exp(-\text{i}tH_{0})\triangleright u_{\mathbf{p}}(\mathbf{x})=u_{\mathbf{p}}(\mathbf{x})\circledast\exp(-\text{i}t\,\mathbf{p}^{2}(2m)^{-1})\nonumber\\
& =\operatorname*{vol}\nolimits^{-1/2}\exp_{q}(\mathbf{x}|\text{i}\mathbf{p})\circledast\exp(-\text{i}t\,\mathbf{p}^{2}(2m)^{-1}),
\label{ConPlaWav0}
\end{align}
and
\begin{align}
(u^{\ast})_{\mathbf{p}}(\mathbf{x},t) & =(u^{\ast})_{\mathbf{p}}(\mathbf{x})\triangleleft\exp(\text{i}H_{0}t)=\exp(\text{i}t\,\mathbf{p}^{2}(2m)^{-1})\circledast(u^{\ast})_{\mathbf{p}}(\mathbf{x})\nonumber\\
& =\operatorname*{vol}\nolimits^{-1/2}\exp(\text{i}t\,\mathbf{p}^{2}(2m)^{-1})\circledast\exp_{q}^{\ast}(\text{i}\mathbf{p}|\mathbf{x}).
\label{ConPlaWav}
\end{align}
The momentum eigenfunctions are thus multiplied by a time-dependent phase factor when the time evolution operator acts on them. This phase factor is given by
\begin{equation}
\exp(\pm\,\text{i}t\,\mathbf{p}^{2}(2m)^{-1})=\sum_{k\,=\,0}^{\infty}\frac{1}{k!}\left(\pm\,\text{i}t(2m)^{-1}\right)^{k}\mathbf{p}^{2k},
\label{PhaFac}
\end{equation}
where powers of $\mathbf{p}^{2}$ $(=g^{AB}\,p_{A}\circledast p_{B})$ are calculated by using the star product
\begin{equation}
\mathbf{p}^{2k}=\overset{k\text{-times}}{\overbrace{\mathbf{p}^{2}\circledast\ldots\circledast\mathbf{p}^{2}}}=\sum_{l\,=\,0}^{k}(C_{q})_{l}^{k}\,(p_{-})^{k-l}(p_{3})^{2l}(p_{+})^{k-l}.
\label{EntPotP}
\end{equation}
The coefficients $(C_{q})_{l}^{k}$ in the series expansion above satisfy the following recurrence relation ($\lambda_{+}=q+q^{-1}$)
\begin{equation}
(C_{q})_{l}^{k}=-\lambda_{+}\,q^{4l}(C_{q})_{l}^{k-1}+q^{-2}(C_{q})_{l-1}^{k-1}.
\end{equation}
As one can verify by insertion, this recurrence relation has the following solution
\begin{equation}
(C_{q})_{l}^{k}=q^{-2l}(-\lambda_{+})^{k-l}\genfrac{[}{]}{0pt}{}{k}{l}_{q^{4}}.
\end{equation}
The $q$-deformed binomial coefficients are defined in complete analogy to the undeformed case
\begin{equation}
\genfrac{[}{]}{0pt}{}{n}{k}_{q}=\frac{[[n]]_{q}!}{[[n-k]]_{q}!\,[[k]]_{q}!}.
\label{qBinKoeBas}
\end{equation}
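To make the verification explicit (this short check is our addition), we insert the closed form into the recurrence relation and use the $q$-Pascal rule, which follows from the definition in Eq.~(\ref{qBinKoeBas}),
\begin{equation*}
\genfrac{[}{]}{0pt}{}{k}{l}_{q^{4}}=q^{4l}\genfrac{[}{]}{0pt}{}{k-1}{l}_{q^{4}}+\genfrac{[}{]}{0pt}{}{k-1}{l-1}_{q^{4}}.
\end{equation*}
This way, we indeed find
\begin{align*}
-\lambda_{+}\,q^{4l}(C_{q})_{l}^{k-1}+q^{-2}(C_{q})_{l-1}^{k-1} & =q^{-2l}(-\lambda_{+})^{k-l}\Big(q^{4l}\genfrac{[}{]}{0pt}{}{k-1}{l}_{q^{4}}+\genfrac{[}{]}{0pt}{}{k-1}{l-1}_{q^{4}}\Big)\\
& =q^{-2l}(-\lambda_{+})^{k-l}\genfrac{[}{]}{0pt}{}{k}{l}_{q^{4}}=(C_{q})_{l}^{k}.
\end{align*}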
Combining our results, we finally get
\begin{gather}
\exp(\pm\,\text{i}t\,\mathbf{p}^{2}(2m)^{-1})=\nonumber\\
=\sum_{k\,=\,0}^{\infty}\frac{1}{k!}\left(\frac{\pm\,\text{i}t}{2m}\right)^{k}\sum_{l\,=\,0}^{k}q^{-2l}(-\lambda_{+})^{k-l}\genfrac{[}{]}{0pt}{}{k}{l}_{q^{4}}(p_{-})^{k-l}(p_{3})^{2l}(p_{+})^{k-l}\nonumber\\
=\sum_{k\,=\,0}^{\infty}\frac{1}{k!}\left(\frac{\mp\,\text{i}t\lambda_{+}}{2m}\,p_{-}p_{+}\right)^{k}\frac{1}{((p_{3})^{2}/(-q^{2}\lambda_{+}\,p_{-}p_{+});q^{4})_{k}}.
\end{gather}
The second identity is a consequence of Heine's binomial formula \cite{Kac:2002eb}
\begin{equation}
\frac{1}{(z;q)_{k}}=\frac{1}{(1-z)(1-zq)\ldots(1-zq^{k-1})}=\sum_{l\,=\,0}^{k}\genfrac{[}{]}{0pt}{}{k}{l}_{q}\,z^{l}.
\label{HeiBinFor}
\end{equation}
Due to Eqs.~(\ref{ConPlaWav0}) and (\ref{ConPlaWav}), we must finally calculate the star product of the time-dependent phase factor with the time-independent momentum eigenfunction. To get an expression for $u_{\mathbf{p}}(\mathbf{x},t)$, for example, we proceed as follows
\begin{align}
\mathbf{p}^{2k}\circledast(p_{-})^{n_{-}}(p_{3})^{n_{3}}(p_{+})^{n_{+}} & =(p_{-})^{n_{-}}\circledast\mathbf{p}^{2k}\circledast(p_{3})^{n_{3}}(p_{+})^{n_{+}}\nonumber\\
& =\sum_{l\,=\,0}^{k}(C_{q})_{l}^{k}\,(p_{-})^{n_{-}+k-l}(p_{3})^{2l}(p_{+})^{k-l}\circledast(p_{3})^{n_{3}}(p_{+})^{n_{+}}\nonumber\\
& =\sum_{l\,=\,0}^{k}q^{2n_{3}(k-l)}(C_{q})_{l}^{k}\,(p_{-})^{n_{-}+k-l}(p_{3})^{n_{3}+2l}(p_{+})^{n_{+}+k-l}.
\label{ZwiStePro}
\end{align}
In the first step of the calculation above, we have used the fact that $\mathbf{p}^{2}$ is a central element of the momentum algebra. In the second step, we have inserted the expression given in Eq.~(\ref{EntPotP}). The last step follows from Eq.~(\ref{StaProForExp}) in Chap.~\ref{KapQuaZeiEle} if we take into account that $p_{A}=g_{AB}\,p^{B}$.
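For orientation (our addition, obtained by evaluating the formulas above at lowest order), the case $k=1$ of Eq.~(\ref{EntPotP}) reads
\begin{equation*}
\mathbf{p}^{2}=(C_{q})_{0}^{1}\,p_{-}p_{+}+(C_{q})_{1}^{1}\,(p_{3})^{2}=-\lambda_{+}\,p_{-}p_{+}+q^{-2}(p_{3})^{2},
\end{equation*}
so the phase factor starts as $1\pm\text{i}t(2m)^{-1}(q^{-2}(p_{3})^{2}-\lambda_{+}\,p_{-}p_{+})+O(t^{2})$. In the limit $q\rightarrow1$, we have $\lambda_{+}\rightarrow2$ and recover the undeformed square of the momentum.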
With the result of Eq.~(\ref{ZwiStePro}), we obtain from Eqs.~(\ref{ConPlaWav0}) and (\ref{PhaFac}) together with Eq.~(\ref{ExpEukExp}) of Chap.~\ref{KapExp} the following expression for $u_{\mathbf{p}}(\mathbf{x},t)$
\begin{align}
u_{\mathbf{p}}(\mathbf{x},t)= & \operatorname*{vol}\nolimits^{-1/2}\sum_{\underline{n}\,=\,0}^{\infty}\sum_{k\,=\,0}^{\infty}\sum_{l\,=\,0}^{k}\frac{(-\lambda_{+})^{k-l}q^{-2l+2n_{3}(k-l)}}{k!\,[[n_{+}]]_{q^{4}}!\,[[n_{3}]]_{q^{2}}!\,[[n_{-}]]_{q^{4}}!}\genfrac{[}{]}{0pt}{}{k}{l}_{q^{4}}\nonumber\\
& \qquad\quad\times(2m)^{-k}(\text{i}t)^{k}(x^{+})^{n_{+}}(x^{3})^{n_{3}}(x^{-})^{n_{-}}\nonumber\\
& \qquad\quad\times(\text{i}p_{-})^{n_{-}+k-l}(\text{i}p_{3})^{n_{3}+2l}(\text{i}p_{+})^{n_{+}+k-l}.
\label{ExpForEbeWel}
\end{align}
The time-dependent phase factor depends only on $\mathbf{p}^{2}$. Thus the phase factor is a central element of the $q$-deformed momentum algebra. With this insight, we can show that our plane wave solutions are momentum eigenfunctions as well [also see Eq.~(\ref{ImpEigWelSol})]
\begin{align}
\text{i}^{-1}\partial_{A}\triangleright u_{\mathbf{p}}(\mathbf{x},t) & =\text{i}^{-1}\partial_{A}\triangleright u_{\mathbf{p}}(\mathbf{x})\circledast\exp\left(-\text{i}t\,\mathbf{p}^{2}(2m)^{-1}\right)\nonumber\\
& =u_{\mathbf{p}}(\mathbf{x})\circledast p_{A}\circledast\exp\left(-\text{i}t\,\mathbf{p}^{2}(2m)^{-1}\right)\nonumber\\
& =u_{\mathbf{p}}(\mathbf{x})\circledast\exp\left(-\text{i}t\,\mathbf{p}^{2}(2m)^{-1}\right)\circledast p_{A}=u_{\mathbf{p}}(\mathbf{x},t)\circledast p_{A}.
\end{align}
By quantum space conjugation, we can obtain further $q$-deformed Schr\"{o}dinger equations from Eq.~(\ref{FreParSch1N}), i.e. [also see Eq.~(\ref{RegConAbl}) of Chap.~\ref{KapParDer}]
\begin{align}
\phi_{L}(\mathbf{x},t)\,\bar{\triangleleft}\,\partial_{t}\,\text{i} & =\phi_{L}(\mathbf{x},t)\,\bar{\triangleleft}\,H_{0},\nonumber\\
\text{i}\partial_{t}\,\bar{\triangleright}\,\phi_{R}^{\ast}(\mathbf{x},t) & =H_{0}\,\bar{\triangleright}\,\phi_{R}^{\ast}(\mathbf{x},t)
\label{KonSchrGle}
\end{align}
with
\begin{equation}
\overline{\phi_{R}(\mathbf{x},t)}=\phi_{L}(\mathbf{x},t),\qquad\overline{\phi_{L}^{\ast}(\mathbf{x},t)}=\phi_{R}^{\ast}(\mathbf{x},t).
\label{VerKonWelFkt}
\end{equation}
Accordingly, the quantum space conjugates of the plane waves $u_{\mathbf{p}}(\mathbf{x},t)$ and $(u^{\ast})_{\mathbf{p}}(\mathbf{x},t)$ are plane wave solutions to the $q$-deformed Schr\"{o}dinger equations given in Eq.~(\ref{KonSchrGle}), i.e.
\begin{align}
u^{\mathbf{p}}(\mathbf{x},t)\,\bar{\triangleleft}\,\partial_{t}\,\text{i} & =u^{\mathbf{p}}(\mathbf{x},t)\,\bar{\triangleleft}\,H_{0}=(2m)^{-1}\mathbf{p}^{2}\circledast u^{\mathbf{p}}(\mathbf{x},t),\nonumber\\
\text{i}\partial_{t}\,\bar{\triangleright}\,(u^{\ast})^{\mathbf{p}}(\mathbf{x},t) & =H_{0}\,\bar{\triangleright}\,(u^{\ast})^{\mathbf{p}}(\mathbf{x},t)=(u^{\ast})^{\mathbf{p}}(\mathbf{x},t)\circledast\mathbf{p}^{2}(2m)^{-1}
\end{align}
with
\begin{equation}
u^{\mathbf{p}}(\mathbf{x},t)=\overline{u_{\mathbf{p}}(\mathbf{x},t)},\qquad(u^{\ast})^{\mathbf{p}}(\mathbf{x},t)=\overline{(u^{\ast})_{\mathbf{p}}(\mathbf{x},t)}.
\label{KonEbeWel}
\end{equation}
The new plane wave solutions are subject to the identities
\begin{align}
u^{\mathbf{p}}(\mathbf{x},t) & =u^{\mathbf{p}}(\mathbf{x})\,\bar{\triangleleft}\,\exp(\text{i}H_{0}t)\nonumber\\
& =\exp\left(\text{i}t\,\mathbf{p}^{2}(2m)^{-1}\right)\circledast u^{\mathbf{p}}(\mathbf{x})\nonumber\\
& =\exp\left(\text{i}t\,\mathbf{p}^{2}(2m)^{-1}\right)\circledast\exp_{q}(\text{i}^{-1}\mathbf{p}|\mathbf{x})\operatorname*{vol}\nolimits^{-1/2}
\end{align}
and
\begin{align}
(u^{\ast})^{\mathbf{p}}(\mathbf{x},t) & =\exp(-\text{i}tH_{0})\,\bar{\triangleright}\,(u^{\ast})^{\mathbf{p}}(\mathbf{x})\nonumber\\
& =(u^{\ast})^{\mathbf{p}}(\mathbf{x})\circledast\exp\left(-\text{i}t\,\mathbf{p}^{2}(2m)^{-1}\right)\nonumber\\
& =\operatorname*{vol}\nolimits^{-1/2}\exp_{q}^{\ast}(\mathbf{x}|\text{i}^{-1}\mathbf{p})\circledast\exp\left(-\text{i}t\,\mathbf{p}^{2}(2m)^{-1}\right).
\end{align}
Last but not least, we write down an explicit formula for $u^{\mathbf{p}}(\mathbf{x},t)$
\begin{align}
u^{\mathbf{p}}(\mathbf{x},t)= & \operatorname*{vol}\nolimits^{-1/2}\sum_{\underline{n}\,=\,0}^{\infty}\sum_{k\,=\,0}^{\infty}\sum_{l\,=\,0}^{k}\frac{(-\lambda_{+})^{k-l}q^{-2l+2n_{3}(k-l)}}{k!\,[[n_{+}]]_{q^{4}}!\,[[n_{3}]]_{q^{2}}!\,[[n_{-}]]_{q^{4}}!}\genfrac{[}{]}{0pt}{}{k}{l}_{q^{4}}\nonumber\\
& \qquad\quad\times(2m)^{-k}(\text{i}^{-1}p_{-})^{n_{-}+k-l}(\text{i}^{-1}p_{3})^{n_{3}+2l}(\text{i}^{-1}p_{+})^{n_{+}+k-l}\nonumber\\
& \qquad\quad\otimes(\text{i}^{-1}t)^{k}(x^{+})^{n_{+}}(x^{3})^{n_{3}}(x^{-})^{n_{-}}.
\end{align}
Once again, the plane wave solutions $u^{\mathbf{p}}(\mathbf{x},t)$ and $(u^{\ast})^{\mathbf{p}}(\mathbf{x},t)$ describe free particle states with definite energy and momentum.
Due to Eqs.~(\ref{EigGleImpOpeImpEigFkt0}) and (\ref{ImpEigFktqDef2}), the following holds
\begin{align}
u^{\mathbf{p}}(\mathbf{x},t)\,\bar{\triangleleft}\,\partial^{A}\,\text{i}^{-1} & =p^{A}\circledast u^{\mathbf{p}}(\mathbf{x},t),\nonumber\\
u^{\mathbf{p}}(\mathbf{x},t)\,\bar{\triangleleft}\,H_{0} & =-u^{\mathbf{p}}(\mathbf{x},t)\,\bar{\triangleleft}\,\partial^{A}\partial_{A}(2m)^{-1}\nonumber\\
& =(2m)^{-1}\mathbf{p}^{2}\circledast u^{\mathbf{p}}(\mathbf{x},t)
\end{align}
and
\begin{align}
\text{i}^{-1}\partial^{A}\,\bar{\triangleright}\,(u^{\ast})^{\mathbf{p}}(\mathbf{x},t) & =(u^{\ast})^{\mathbf{p}}(\mathbf{x},t)\circledast p^{A},\nonumber\\
H_{0}\,\bar{\triangleright}\,(u^{\ast})^{\mathbf{p}}(\mathbf{x},t) & =-(2m)^{-1}\partial^{A}\partial_{A}\,\bar{\triangleright}\,(u^{\ast})^{\mathbf{p}}(\mathbf{x},t)\nonumber\\
& =(u^{\ast})^{\mathbf{p}}(\mathbf{x},t)\circledast\mathbf{p}^{2}(2m)^{-1}.
\end{align}
For the sake of completeness, we provide another method to obtain $q$-deformed Schr\"{o}dinger equations and their plane wave solutions. We only need to apply the following substitutions to the identities of the present chapter
\begin{gather}
\triangleright\,\leftrightarrow\,\bar{\triangleright},\qquad\triangleleft\,\leftrightarrow\,\bar{\triangleleft},\qquad\partial^{A}\,\leftrightarrow\,\hat{\partial}^{A},\qquad u\,\leftrightarrow\,\bar{u},\nonumber\\
+\,\leftrightarrow\,-,\qquad q\,\leftrightarrow\,q^{-1}.
\end{gather}
Due to these substitutions, we will not consider the momentum eigenfunctions $\bar{u}_{\mathbf{p}}$ and $(\bar{u}^{\ast})_{\mathbf{p}}$ or $\bar{u}^{\mathbf{p}}$ and $(\bar{u}^{\ast})^{\mathbf{p}}$ in the following.

\section{Orthonormality and completeness\label{KapOrtVolEBeWel}}

The $q$-deformed momentum eigenfunctions [cf. Eqs.~(\ref{ImpEigFktqDef}) and (\ref{DefDuaImpEigFktWdh}) of the previous chapter] form a complete orthonormal system of functions \cite{Kempf:1994yd,Wachter:2019A}. In the following, we will show that the same applies to the $q$-deformed plane waves derived in the previous chapter as solutions to the free Schr\"{o}dinger equations. We recall that the $q$-deformed momentum eigenfunctions fulfill the orthogonality relation \cite{Wachter:2019A}
\begin{align}
\int\text{d}_{q}^{3}x\,(u^{\ast})_{\mathbf{p}}(\mathbf{x})\circledast u_{\mathbf{p}^{\prime}}(\mathbf{x}) & =\operatorname*{vol}\nolimits^{-1}\int\text{d}_{q}^{3}x\,\exp_{q}^{\ast}(\text{i}\mathbf{p}|\mathbf{x})\circledast\exp_{q}(\mathbf{x}|\text{i}\mathbf{p}^{\prime})\nonumber\\
& =\operatorname*{vol}\nolimits^{-1}\delta_{q}^{3}((\ominus\,\kappa^{-1}\mathbf{p})\oplus\mathbf{p}^{\prime})
\label{SkaProEbeDreExpWie0}
\end{align}
or
\begin{align}
\int\text{d}_{q}^{3}x\,u^{\mathbf{p}}(\mathbf{x})\circledast(u^{\ast})^{\mathbf{p}^{\prime}}(\mathbf{x}) & =\operatorname*{vol}\nolimits^{-1}\int\text{d}_{q}^{3}x\,\exp_{q}(\text{i}^{-1}\mathbf{p}|\mathbf{x})\circledast\exp_{q}^{\ast}(\mathbf{x}|\text{i}^{-1}\mathbf{p}^{\prime})\nonumber\\
& =\operatorname*{vol}\nolimits^{-1}\delta_{q}^{3}(\mathbf{p}\oplus(\ominus\,\kappa^{-1}\mathbf{p}^{\prime})).
\end{align}
We use the convention that an integral without limits is an integral over all space.
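For comparison (our addition), the undeformed counterpart of Eq.~(\ref{SkaProEbeDreExpWie0}) is the familiar orthogonality relation of ordinary plane waves,
\begin{equation*}
\int\text{d}^{3}x\,\frac{e^{-\text{i}\mathbf{p}\mathbf{x}}}{(2\pi)^{3/2}}\,\frac{e^{\text{i}\mathbf{p}^{\prime}\mathbf{x}}}{(2\pi)^{3/2}}=\delta^{3}(\mathbf{p}^{\prime}-\mathbf{p}),
\end{equation*}
with $\operatorname*{vol}\nolimits^{-1}\delta_{q}^{3}((\ominus\,\kappa^{-1}\mathbf{p})\oplus\mathbf{p}^{\prime})$ playing the role of $\delta^{3}(\mathbf{p}^{\prime}-\mathbf{p})$.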
$\delta_{q}^{3}(\mathbf{p})$ denotes a $q$-deformed version of the three-dimensional delta function. Accordingly, we have
\begin{equation}
\delta_{q}^{3}(\mathbf{p})=\int\text{d}_{q}^{3}x\,\exp_{q}(\text{i}^{-1}\mathbf{p}|\mathbf{x})=\int\text{d}_{q}^{3}x\,\exp_{q}^{\ast}(\mathbf{x}|\text{i}^{-1}\mathbf{p})
\end{equation}
and
\begin{equation}
\operatorname*{vol}=\int\text{d}_{q}^{3}p\,\delta_{q}^{3}(\mathbf{p})=\int\text{d}_{q}^{3}p\int\text{d}_{q}^{3}x\,\exp_{q}(\text{i}^{-1}\mathbf{p}|\mathbf{x}).
\label{VolEleDef}
\end{equation}
In analogy to their undeformed counterparts, the $q$-deformed delta functions fulfill the following identities:\footnote{The occurrence of $\kappa^{-1}=q^{-6}$ indicates that the spatial coordinates are multiplied by this constant.}
\begin{align}
f(\mathbf{y}) & =\operatorname*{vol}\nolimits^{-1}\int\text{d}_{q}^{3}x\,\delta_{q}^{3}(\mathbf{y}\oplus(\ominus\,\kappa^{-1}\mathbf{x}))\circledast f(\mathbf{x})\nonumber\\
& =\operatorname*{vol}\nolimits^{-1}\int\text{d}_{q}^{3}x\,\delta_{q}^{3}((\ominus\,\kappa^{-1}\mathbf{y})\oplus\mathbf{x})\circledast f(\mathbf{x})\nonumber\\
& =\operatorname*{vol}\nolimits^{-1}\int\text{d}_{q}^{3}x\,f(\mathbf{x})\circledast\delta_{q}^{3}((\ominus\,\kappa^{-1}\mathbf{x})\oplus\mathbf{y})\nonumber\\
& =\operatorname*{vol}\nolimits^{-1}\int\text{d}_{q}^{3}x\,f(\mathbf{x})\circledast\delta_{q}^{3}(\mathbf{x}\oplus(\ominus\,\kappa^{-1}\mathbf{y})).
\label{AlgChaIdeqDelFkt}
\end{align}
From Eq.~(\ref{SkaProEbeDreExpWie0}) it follows that the time-dependent $q$-deformed plane waves fulfill an orthonormality relation as well
\begin{align}
& \int\text{d}_{q}^{3}x\,(u^{\ast})_{\mathbf{p}}(\mathbf{x},t)\circledast u_{\mathbf{p}^{\prime}}(\mathbf{x},t)=\nonumber\\
& \qquad=\int\text{d}_{q}^{3}x\,\exp(\text{i}t\,\mathbf{p}^{2}(2m)^{-1})\circledast(u^{\ast})_{\mathbf{p}}(\mathbf{x})\circledast u_{\mathbf{p}^{\prime}}(\mathbf{x})\circledast\exp(-\text{i}t\,\mathbf{p}^{\prime\,2}(2m)^{-1})\nonumber\\
& \qquad=\operatorname*{vol}\nolimits^{-1}\exp(\text{i}t\,\mathbf{p}^{2}(2m)^{-1})\circledast\delta_{q}^{3}((\ominus\,\kappa^{-1}\mathbf{p})\oplus\mathbf{p}^{\prime})\circledast\exp(-\text{i}t\,\mathbf{p}^{\prime\,2}(2m)^{-1})\nonumber\\
& \qquad=\operatorname*{vol}\nolimits^{-1}\exp(\text{i}t\,\mathbf{p}^{2}(2m)^{-1})\circledast\exp(-\text{i}t\,\mathbf{p}^{2}(2m)^{-1})\circledast\delta_{q}^{3}((\ominus\,\kappa^{-1}\mathbf{p})\oplus\mathbf{p}^{\prime})\nonumber\\
& \qquad=\operatorname*{vol}\nolimits^{-1}\delta_{q}^{3}((\ominus\,\kappa^{-1}\mathbf{p})\oplus\mathbf{p}^{\prime}).
\label{OrtRelEbeWel0Schr}
\end{align}
Likewise, it holds
\begin{align}
\int\text{d}_{q}^{3}x\,u^{\mathbf{p}}(\mathbf{x},t)\circledast(u^{\ast})^{\mathbf{p}^{\prime}}(\mathbf{x},t) & =\int\text{d}_{q}^{3}x\,u^{\mathbf{p}}(\mathbf{x})\circledast(u^{\ast})^{\mathbf{p}^{\prime}}(\mathbf{x})\nonumber\\
& =\operatorname*{vol}\nolimits^{-1}\delta_{q}^{3}(\mathbf{p}\oplus(\ominus\,\kappa^{-1}\mathbf{p}^{\prime})).
\label{OrtRelEbeWel1Schr}
\end{align}
Let $\phi_{R}(\mathbf{x},t)$ be a solution to a $q$-deformed Schr\"{o}dinger equation [cf. Eq.~(\ref{FreParSch1N}) of the previous chapter]. Remember that the $q$-deformed momentum eigenfunctions $u_{\mathbf{p}}(\mathbf{x})$ form a complete set of functions \cite{Wachter:2019A}. Thus, we can write the function $\phi_{R}(\mathbf{x},t=0)$ as a series expansion in terms of these momentum eigenfunctions, i.e.
\begin{equation}
\phi_{R}(\mathbf{x},0)=\int\text{d}_{q}^{3}p\,u_{\mathbf{p}}(\mathbf{x})\circledast c_{\mathbf{p}}
\end{equation}
with
\begin{equation}
c_{\mathbf{p}}=\int\text{d}_{q}^{3}x\,(u^{\ast})_{\mathbf{p}}(\mathbf{x})\circledast\phi_{R}(\mathbf{x},0).
\end{equation}
For this reason, there is also a series expansion of $\phi_{R}(\mathbf{x},t)$ in terms of the time-dependent plane waves $u_{\mathbf{p}}(\mathbf{x},t)$
\begin{align}
\phi_{R}(\mathbf{x},t) & =\exp(-\text{i}tH_{0})\triangleright\phi_{R}(\mathbf{x},0)=\exp(-\text{i}tH_{0})\triangleright\int\text{d}_{q}^{3}p\,u_{\mathbf{p}}(\mathbf{x})\circledast c_{\mathbf{p}}\nonumber\\
& =\int\text{d}_{q}^{3}p\,\exp(-\text{i}tH_{0})\triangleright u_{\mathbf{p}}(\mathbf{x})\circledast c_{\mathbf{p}}=\int\text{d}_{q}^{3}p\,u_{\mathbf{p}}(\mathbf{x},t)\circledast c_{\mathbf{p}}.
\label{ExpQR}
\end{align}
Moreover, we can recover the coefficients $c_{\mathbf{p}}$ from $\phi_{R}(\mathbf{x},t)$ as follows
\begin{align}
\int\text{d}_{q}^{3}x\,(u^{\ast})_{\mathbf{p}}(\mathbf{x},t)\circledast\phi_{R}(\mathbf{x},t) & =\int\text{d}_{q}^{3}x\,(u^{\ast})_{\mathbf{p}}(\mathbf{x},t)\circledast\int\text{d}_{q}^{3}p^{\prime}\,u_{\mathbf{p}^{\prime}}(\mathbf{x},t)\circledast c_{\mathbf{p}^{\prime}}\nonumber\\
& =\int\text{d}_{q}^{3}p^{\prime}\int\text{d}_{q}^{3}x\,(u^{\ast})_{\mathbf{p}}(\mathbf{x},t)\circledast u_{\mathbf{p}^{\prime}}(\mathbf{x},t)\circledast c_{\mathbf{p}^{\prime}}\nonumber\\
& =\int\text{d}_{q}^{3}p^{\prime}\operatorname*{vol}\nolimits^{-1}\delta_{q}^{3}(\mathbf{p}\oplus(\ominus\,\kappa^{-1}\mathbf{p}^{\prime}))\circledast c_{\mathbf{p}^{\prime}}\nonumber\\
& =c_{\mathbf{p}}.
\label{ExpKoeQR}
\end{align}
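Equation~(\ref{ExpKoeQR}) shows, in particular, that the coefficients $c_{\mathbf{p}}$ do not depend on the time at which they are computed. This mirrors the undeformed situation (our addition, written with the usual continuum normalization), where
\begin{equation*}
c(\mathbf{p})=\int\text{d}^{3}x\,\frac{e^{-\text{i}\mathbf{p}\mathbf{x}}}{(2\pi)^{3/2}}\,e^{\text{i}t\,\mathbf{p}^{2}(2m)^{-1}}\,\psi(\mathbf{x},t)
\end{equation*}
yields the same Fourier coefficient for every time $t$ if $\psi$ solves the free Schr\"{o}dinger equation.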
The same considerations apply to the solutions of the other $q$-deformed versions of the Schr\"{o}dinger equation. This way, we get
\begin{equation}
\phi_{L}(\mathbf{x},t)=\int\text{d}_{q}^{3}p\,c^{\,\mathbf{p}}\circledast u^{\mathbf{p}}(\mathbf{x},t)
\label{EntWelFktEbeDreDim1}
\end{equation}
and
\begin{align}
\phi_{R}^{\ast}(\mathbf{x},t) & =\int\text{d}_{q}^{3}p\,(u^{\ast})^{\mathbf{p}}(\mathbf{x},t)\circledast(c^{\ast})^{\mathbf{p}},\nonumber\\[0.1in]
\phi_{L}^{\ast}(\mathbf{x},t) & =\int\text{d}_{q}^{3}p\,(c^{\ast})_{\mathbf{p}}\circledast(u^{\ast})_{\mathbf{p}}(\mathbf{x},t).
\label{EntWelFktEbeDreDim2}
\end{align}
For the coefficients in the above series expansions, we have
\begin{equation}
c^{\,\mathbf{p}}=\int\text{d}_{q}^{3}x\,\phi_{L}(\mathbf{x},t)\circledast(u^{\ast})^{\mathbf{p}}(\mathbf{x},t)
\label{BesEntKoeSchrEbe1}
\end{equation}
and
\begin{align}
(c^{\ast})_{\mathbf{p}} & =\int\text{d}_{q}^{3}x\,\phi_{L}^{\ast}(\mathbf{x},t)\circledast u_{\mathbf{p}}(\mathbf{x},t),\nonumber\\
(c^{\ast})^{\mathbf{p}} & =\int\text{d}_{q}^{3}x\,u^{\mathbf{p}}(\mathbf{x},t)\circledast\phi_{R}^{\ast}(\mathbf{x},t).
\label{BesEntKoeSchrEbe2}
\end{align}
The above expressions for the coefficients and the behavior of the free Schr\"{o}dinger wave functions under quantum space conjugation [cf. Eqs.~(\ref{VerKonWelFkt}) and (\ref{KonEbeWel}) in Chap.~\ref{LoeSchGleKap}] imply the following conjugation properties
\begin{equation}
\overline{c^{\,\mathbf{p}}}=c_{\mathbf{p}},\qquad\overline{(c^{\ast})^{\mathbf{p}}}=(c^{\ast})_{\mathbf{p}}.
\label{KonBedEntKoe}
\end{equation}
Finally, we determine \textit{completeness relations} for our $q$-deformed plane waves. To this end, we consider the series expansion of $\phi_{R}(\mathbf{x},t)$ in terms of the plane waves $u_{\mathbf{p}}(\mathbf{x},t)$ [cf. Eq.~(\ref{ExpQR})] and insert the expression for the coefficients $c_{\mathbf{p}}$ [cf. Eq.~(\ref{ExpKoeQR})]
\begin{align}
\phi_{R}(\mathbf{x},t) & =\int\text{d}_{q}^{3}p\,u_{\mathbf{p}}(\mathbf{x},t)\circledast c_{\mathbf{p}}\nonumber\\
& =\int\text{d}_{q}^{3}p\,u_{\mathbf{p}}(\mathbf{x},t)\circledast\int\text{d}_{q}^{3}y\,(u^{\ast})_{\mathbf{p}}(\mathbf{y},t)\circledast\phi_{R}(\mathbf{y},t)\nonumber\\
& =\int\text{d}_{q}^{3}y\int\text{d}_{q}^{3}p\,u_{\mathbf{p}}(\mathbf{x},t)\circledast(u^{\ast})_{\mathbf{p}}(\mathbf{y},t)\circledast\phi_{R}(\mathbf{y},t).
\label{RecVolRelZeiEbeWel1}
\end{align}
Comparing the above result with the identities in Eq.~(\ref{AlgChaIdeqDelFkt}), we find the following completeness relation
\begin{equation}
\int\text{d}_{q}^{3}p\,u_{\mathbf{p}}(\mathbf{x},t)\circledast(u^{\ast})_{\mathbf{p}}(\mathbf{y},t)=\operatorname*{vol}\nolimits^{-1}\delta_{q}^{3}(\mathbf{x}\oplus(\ominus\,\kappa^{-1}\mathbf{y})).
\label{VolRelZeiWelDreDim1}
\end{equation}
In the same manner, we get
\begin{equation}
\int\text{d}_{q}^{3}p\,(u^{\ast})^{\mathbf{p}}(\mathbf{y},t)\circledast u^{\mathbf{p}}(\mathbf{x},t)=\operatorname*{vol}\nolimits^{-1}\delta_{q}^{3}((\ominus\,\kappa^{-1}\mathbf{y})\oplus\mathbf{x}).
\label{VolRelZeiWelDreDim2}
\end{equation}
\section{Free particle propagators\label{KapProSchrFel}}

If we know the wave function of a quantum system at a given time, we can find the wave function at any other time with the help of the time evolution operator [also see Eq.~(\ref{AnwZeitEnt}) of Chap.~\ref{LoeSchGleKap}]. Alternatively, we can use the propagator to solve the time evolution problem. In this chapter, we give $q$-deformed expressions for the propagator of a free nonrelativistic particle. Additionally, we derive some important properties of these $q$-deformed propagators.

As shown in the previous chapter, we can write solutions to the $q$-deformed Schr\"{o}dinger equations of a free nonrelativistic particle as series expansions in terms of plane waves [cf. Eqs.~(\ref{ExpQR}), (\ref{EntWelFktEbeDreDim1}), and (\ref{EntWelFktEbeDreDim2}) of the previous chapter], i.e.
\begin{align}
\phi_{R}(\mathbf{x},t) & =\int\text{d}_{q}^{3}p\,u_{\mathbf{p}}(\mathbf{x},t)\circledast c_{\mathbf{p}},\nonumber\\
\phi_{L}(\mathbf{x},t) & =\int\text{d}_{q}^{3}p\,c^{\,\mathbf{p}}\circledast u^{\mathbf{p}}(\mathbf{x},t),
\label{EntWicEbeWelDreDim1}
\end{align}
and
\begin{align}
\phi_{R}^{\ast}(\mathbf{x},t) & =\int\text{d}_{q}^{3}p\,(u^{\ast})^{\mathbf{p}}(\mathbf{x},t)\circledast(c^{\ast})^{\mathbf{p}},\nonumber\\[0.1in]
\phi_{L}^{\ast}(\mathbf{x},t) & =\int\text{d}_{q}^{3}p\,(c^{\ast})_{\mathbf{p}}\circledast(u^{\ast})_{\mathbf{p}}(\mathbf{x},t).
\label{EntWicEbeWelDreDim2}
\end{align}
Furthermore, we know how to calculate the corresponding coefficients from the wave functions [cf. Eqs.~(\ref{ExpKoeQR}), (\ref{BesEntKoeSchrEbe1}), and (\ref{BesEntKoeSchrEbe2}) of the previous chapter], i.e.
\begin{align}
c_{\mathbf{p}} & =\int\text{d}_{q}^{3}x\,(u^{\ast})_{\mathbf{p}}(\mathbf{x},t)\circledast\phi_{R}(\mathbf{x},t),\nonumber\\
c^{\,\mathbf{p}} & =\int\text{d}_{q}^{3}x\,\phi_{L}(\mathbf{x},t)\circledast(u^{\ast})^{\mathbf{p}}(\mathbf{x},t),
\label{EntKoeDreDim1Wie}
\end{align}
and
\begin{align}
(c^{\ast})_{\mathbf{p}} & =\int\text{d}_{q}^{3}x\,\phi_{L}^{\ast}(\mathbf{x},t)\circledast u_{\mathbf{p}}(\mathbf{x},t),\nonumber\\
(c^{\ast})^{\mathbf{p}} & =\int\text{d}_{q}^{3}x\,u^{\mathbf{p}}(\mathbf{x},t)\circledast\phi_{R}^{\ast}(\mathbf{x},t).
\label{EntKoeDreDim4Wie}
\end{align}
Next, we derive formulas for the $q$-deformed propagators of the free nonrelativistic particle.
We insert the expressions from Eq.~(\ref{EntKoeDreDim1Wie}) or Eq.~(\ref{EntKoeDreDim4Wie}) into Eq.~(\ref{EntWicEbeWelDreDim1}) or Eq.~(\ref{EntWicEbeWelDreDim2}) and obtain the integral equations
\begin{align}
\phi_{R}(\mathbf{x}^{\prime},t^{\prime}) & =\int\text{d}_{q}^{3}x\,K_{R}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t)\circledast\phi_{R}(\mathbf{x},t),\nonumber\\
\phi_{L}(\mathbf{x}^{\prime},t^{\prime}) & =\int\text{d}_{q}^{3}x\,\phi_{L}(\mathbf{x},t)\circledast K_{L}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime}),
\label{DefProDreDim1}
\end{align}
or
\begin{align}
\phi_{R}^{\ast}(\mathbf{x}^{\prime},t^{\prime}) & =\int\text{d}_{q}^{3}x\,K_{R}^{\ast}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t)\circledast\phi_{R}^{\ast}(\mathbf{x},t),\nonumber\\[0.1in]
\phi_{L}^{\ast}(\mathbf{x}^{\prime},t^{\prime}) & =\int\text{d}_{q}^{3}x\,\phi_{L}^{\ast}(\mathbf{x},t)\circledast K_{L}^{\ast}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime})
\label{DefProDreDim2}
\end{align}
with the integral kernels
\begin{align}
K_{R}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t) & =\int\text{d}_{q}^{3}p\,u_{\mathbf{p}}(\mathbf{x}^{\prime},t^{\prime})\circledast(u^{\ast})_{\mathbf{p}}(\mathbf{x},t),\nonumber\\
K_{L}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime}) & =\int\text{d}_{q}^{3}p\,(u^{\ast})^{\mathbf{p}}(\mathbf{x},t)\circledast u^{\mathbf{p}}(\mathbf{x}^{\prime},t^{\prime}),
\label{IntKer1}
\end{align}
or
\begin{align}
K_{R}^{\ast}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t) & =\int\text{d}_{q}^{3}p\,(u^{\ast})^{\mathbf{p}}(\mathbf{x}^{\prime},t^{\prime})\circledast u^{\mathbf{p}}(\mathbf{x},t),\nonumber\\
K_{L}^{\ast}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime}) & =\int\text{d}_{q}^{3}p\,u_{\mathbf{p}}(\mathbf{x},t)\circledast(u^{\ast})_{\mathbf{p}}(\mathbf{x}^{\prime},t^{\prime}).
\label{IntKer2}
\end{align}
Comparing Eq.~(\ref{IntKer1}) and Eq.~(\ref{IntKer2}) gives us
\begin{align}
K_{R}(\mathbf{x}_{1},t_{1};\mathbf{x}_{2},t_{2}) & =K_{L}^{\ast}(\mathbf{x}_{1},t_{1};\mathbf{x}_{2},t_{2}),\nonumber\\
K_{L}(\mathbf{x}_{1},t_{1};\mathbf{x}_{2},t_{2}) & =K_{R}^{\ast}(\mathbf{x}_{1},t_{1};\mathbf{x}_{2},t_{2}).
\label{ZusProSch}
\end{align}
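For orientation (our addition, with $\hbar=1$ and the continuum normalization of the undeformed theory), the kernel $K_{R}$ is the $q$-analogue of the familiar free Schr\"{o}dinger propagator
\begin{equation*}
K(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t)=\int\frac{\text{d}^{3}p}{(2\pi)^{3}}\,e^{\text{i}\mathbf{p}(\mathbf{x}^{\prime}-\mathbf{x})}\,e^{-\text{i}\mathbf{p}^{2}(t^{\prime}-t)/(2m)},
\end{equation*}
which Eq.~(\ref{IntKer1}) reproduces formally in the limit $q\rightarrow1$.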
The propagators must satisfy the principle of causality in order to describe the time evolution of the Schr\"{o}dinger wave functions correctly: the wave function at time $t$ cannot depend on the wave function at later times $t^{\prime}>t$. The \textbf{retarded propagators} satisfy this requirement
\begin{align}
(K_{R})^{+}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t) & =\theta(t^{\prime}-t)\,K_{R}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t),\nonumber\\
(K_{L})^{+}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime}) & =\theta(t^{\prime}-t)\,K_{L}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime}),
\label{DefRetPro1}\\[0.1in]
(K_{R}^{\ast})^{+}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t) & =\theta(t^{\prime}-t)\,K_{R}^{\ast}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t),\nonumber\\
(K_{L}^{\ast})^{+}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime}) & =\theta(t^{\prime}-t)\,K_{L}^{\ast}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime}).
\label{DefRetPro2}
\end{align}
Note that $\theta(t)$ stands for the Heaviside function
\begin{equation}
\theta(t)=
\begin{cases}
1 & \text{if }t\geq0,\\
0 & \text{otherwise}.
\end{cases}
\end{equation}
The \textbf{advanced propagators}, on the other hand, describe the propagation of a wave function backward in time
\begin{align}
(K_{R})^{-}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t) & =\theta(t-t^{\prime})\,K_{R}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t),\nonumber\\
(K_{L})^{-}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime}) & =\theta(t-t^{\prime})\,K_{L}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime}),
\label{DefAdvPro1}\\[0.1in]
(K_{R}^{\ast})^{-}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t) & =\theta(t-t^{\prime})\,K_{R}^{\ast}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t),\nonumber\\
(K_{L}^{\ast})^{-}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime}) & =\theta(t-t^{\prime})\,K_{L}^{\ast}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime}).
\label{DefAdvPro2}
\end{align}
From Eq.~(\ref{ZusProSch}) follows
\begin{align}
(K_{R})^{\pm}(\mathbf{x}_{1},t_{1};\mathbf{x}_{2},t_{2}) & =(K_{L}^{\ast})^{\mp}(\mathbf{x}_{1},t_{1};\mathbf{x}_{2},t_{2}),\nonumber\\
(K_{L})^{\pm}(\mathbf{x}_{1},t_{1};\mathbf{x}_{2},t_{2}) & =(K_{R}^{\ast})^{\mp}(\mathbf{x}_{1},t_{1};\mathbf{x}_{2},t_{2}).
\label{ZusGreAvaRetN}
\end{align}
The propagators in Eqs.~(\ref{DefRetPro1}) and (\ref{DefRetPro2}) are solutions to inhomogeneous wave equations.
We show this for the propagator $(K_{R})^{+}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t)$
\begin{align}
& \text{i}\partial_{t^{\prime}}\triangleright(K_{R})^{+}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t)=\nonumber\\
& \qquad=\Big(\text{i}\frac{\partial}{\partial t^{\prime}}\theta(t^{\prime}-t)\Big)\,K_{R}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t)+\theta(t^{\prime}-t)\,\text{i}\frac{\partial}{\partial t^{\prime}}K_{R}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t)\nonumber\\
& \qquad=\text{i}\,\delta(t^{\prime}-t)\,K_{R}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t)+\theta(t^{\prime}-t)\,H_{0}^{\prime}\triangleright K_{R}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t)\nonumber\\[0.03in]
& \qquad=\text{i}\operatorname*{vol}\nolimits^{-1}\delta(t^{\prime}-t)\,\delta_{q}^{3}(\mathbf{x}^{\prime}\oplus(\ominus\,\kappa^{-1}\mathbf{x}))+H_{0}^{\prime}\triangleright(K_{R})^{+}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t).
\label{SchrGlGreHab}
\end{align}
In the second step of the calculation above, we have taken into account that applying the time derivative to the Heaviside function gives the classical delta function. Moreover, we have used the result of the following calculation [see Eq.~(\ref{IntKer1}) as well as Eq.~(\ref{FreSchGlImp0}) of Chap.~\ref{LoeSchGleKap}]
\begin{align}
\text{i}\frac{\partial}{\partial t^{\prime}}K_{R}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t) & =\int\text{d}_{q}^{3}p\,\text{i}\frac{\partial}{\partial t^{\prime}}u_{\mathbf{p}}(\mathbf{x}^{\prime},t^{\prime})\circledast(u^{\ast})_{\mathbf{p}}(\mathbf{x},t)\nonumber\\
& =\int\text{d}_{q}^{3}p\,H_{0}\overset{x^{\prime}}{\triangleright}u_{\mathbf{p}}(\mathbf{x}^{\prime},t^{\prime})\circledast(u^{\ast})_{\mathbf{p}}(\mathbf{x},t)\nonumber\\
& =H_{0}\overset{x^{\prime}}{\triangleright}K_{R}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t).
\end{align}
Note that the last step in Eq.~(\ref{SchrGlGreHab}) follows from the following identity [cf. Eq.~(\ref{VolRelZeiWelDreDim1}) of the previous chapter]
\begin{align}
\lim_{t^{\prime}\rightarrow\,t}K_{R}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t) & =\int\text{d}_{q}^{3}p\,u_{\mathbf{p}}(\mathbf{x}^{\prime})\circledast(u^{\ast})_{\mathbf{p}}(\mathbf{x})\nonumber\\
& =\operatorname*{vol}\nolimits^{-1}\delta_{q}^{3}(\mathbf{x}^{\prime}\oplus(\ominus\,\kappa^{-1}\mathbf{x})).
\label{RanBedKLSch}
\end{align}
The same arguments yield the following result for the advanced propagator
\begin{equation}
(\text{i}\partial_{t^{\prime}}-H_{0}^{\prime})\triangleright(K_{R})^{-}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t)=-\,\text{i}\operatorname*{vol}\nolimits^{-1}\delta(t^{\prime}-t)\,\delta_{q}^{3}(\mathbf{x}^{\prime}\oplus(\ominus\,\kappa^{-1}\mathbf{x})).
\label{SchrGlGreHab2}
\end{equation}
The other $q$-deformed versions of the Schr\"{o}dinger propagator satisfy similar wave equations, i.e.
\begin{equation}
(K_{L})^{\pm}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime})\,\bar{\triangleleft}\,(\partial_{t^{\prime}}\,\text{i}-H_{0}^{\prime})=\mp\,\text{i}\operatorname*{vol}\nolimits^{-1}\delta(t-t^{\prime})\,\delta_{q}^{3}((\ominus\,\kappa^{-1}\mathbf{x})\oplus\mathbf{x}^{\prime})
\end{equation}
or
\begin{align}
(\text{i}\partial_{t^{\prime}}-H_{0}^{\prime})\,\bar{\triangleright}\,(K_{R}^{\ast})^{\pm}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t) & =\pm\,\text{i}\operatorname*{vol}\nolimits^{-1}\delta(t^{\prime}-t)\,\delta_{q}^{3}((\ominus\,\kappa^{-1}\mathbf{x}^{\prime})\oplus\mathbf{x}),\nonumber\\
(K_{L}^{\ast})^{\pm}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime})\triangleleft(\partial_{t^{\prime}}\,\text{i}-H_{0}^{\prime}) & =\mp\,\text{i}\operatorname*{vol}\nolimits^{-1}\delta(t-t^{\prime})\,\delta_{q}^{3}(\mathbf{x}\oplus(\ominus\,\kappa^{-1}\mathbf{x}^{\prime})).
\label{SchrGlGreHabEnd}
\end{align}
We can use our $q$-deformed Schr\"{o}dinger propagators to obtain solutions to the following inhomogeneous wave equations
\begin{align}
(\text{i}\partial_{t}-H_{0})\triangleright(\psi_{R})^{\pm}(\mathbf{x},t) & =\varrho(\mathbf{x},t),\nonumber\\
(\psi_{L})^{\pm}(\mathbf{x},t)\,\bar{\triangleleft}\,(\partial_{t}\,\text{i}-H_{0}) & =\varrho(\mathbf{x},t),
\label{InhSchrGleAll}\\[0.1in]
(\text{i}\partial_{t}-H_{0})\,\bar{\triangleright}\,(\psi_{R}^{\ast})^{\pm}(\mathbf{x},t) & =\varrho(\mathbf{x},t),\nonumber\\
(\psi_{L}^{\ast})^{\pm}(\mathbf{x},t)\triangleleft(\partial_{t}\,\text{i}-H_{0}) & =\varrho(\mathbf{x},t).
\end{align}
\end{align} Due Eq.~(\ref{SchrGlGreHab}) and Eqs.~(\ref{SchrGlGreHab2 )-(\ref{SchrGlGreHabEnd}), these solutions ar \begin{align} (\psi_{R})^{\pm}(\mathbf{x},t) & =\mp\,\text{i}\int\text{d}t^{\hspace {0.01in}\prime}\hspace{-0.01in}\int\text{d}_{q}^{3}x^{\prime}\,(K_{R})^{\pm }(\mathbf{x},t;\mathbf{x}^{\hspace{0.01in}\prime}\hspace{-0.01in ,t^{\hspace{0.01in}\prime})\circledast\varrho(\mathbf{x}^{\hspace {0.01in}\prime}\hspace{-0.01in},t^{\hspace{0.01in}\prime}),\nonumber\\ (\psi_{L})^{\pm}(\mathbf{x},t) & =\pm\,\text{i}\int\text{d}t^{\hspace {0.01in}\prime}\hspace{-0.01in}\int\text{d}_{q}^{3}x^{\prime}\,\varrho (\mathbf{x}^{\hspace{0.01in}\prime},t^{\hspace{0.01in}\prime})\circledast (K_{L})^{\pm}(\mathbf{x}^{\hspace{0.01in}\prime}\hspace{-0.01in ,t^{\hspace{0.01in}\prime}\hspace{-0.01in};\mathbf{x ,t),\label{SolInhSchr1Hab \end{align} and \begin{align} (\psi_{R}^{\ast})^{\pm}(\mathbf{x},t) & =\mp\,\text{i}\int\text{d t^{\hspace{0.01in}\prime}\hspace{-0.01in}\int\text{d}_{q}^{3}x^{\prime }\,(K_{R}^{\ast})^{\pm}(\mathbf{x},t;\mathbf{x}^{\hspace{0.01in}\prime \hspace{-0.01in},t^{\hspace{0.01in}\prime})\circledast\varrho(\mathbf{x ^{\hspace{0.01in}\prime}\hspace{-0.01in},t^{\hspace{0.01in}\prime }),\nonumber\\ (\psi_{L}^{\ast})^{\pm}(\mathbf{x},t) & =\pm\,\text{i}\int\text{d t^{\hspace{0.01in}\prime}\hspace{-0.01in}\int\text{d}_{q}^{3}x^{\prime }\,\varrho(\mathbf{x}^{\hspace{0.01in}\prime}\hspace{-0.01in},t^{\hspace {0.01in}\prime})\circledast(K_{L}^{\ast})^{\pm}(\mathbf{x}^{\hspace {0.01in}\prime}\hspace{-0.01in},t^{\hspace{0.01in}\prime}\hspace {-0.01in};\mathbf{x},t). \label{SolInhSchr2Hab \end{align} By way of example, we check that the expression for $(\psi_{R})^{\pm }(\mathbf{x},t)$ satisfies the first identity in\ Eq.~(\ref{InhSchrGleAll}) \begin{align} & (\text{i}\partial_{t}-H_{0})\triangleright(\psi_{R})^{\pm}(\mathbf{x ,t)=\nonumber\\ & \qquad=\mp\,\text{i}\int\text{d}t^{\hspace{0.01in}\prime}\int\text{d _{q}^{3}x^{\prime}\,(\text{i}\partial_{t}-H_{0})\triangleright(K_{R})^{\pm }(\mathbf{x},t;\mathbf{x}^{\hspace{0.01in}\prime}\hspace{-0.01in ,t^{\hspace{0.01in}\prime})\circledast\varrho(\mathbf{x}^{\hspace {0.01in}\prime}\hspace{-0.01in},t^{\hspace{0.01in}\prime})\nonumber\\ & \qquad=\operatorname*{vol}\nolimits^{-1}\hspace{-0.02in}\int\text{d t^{\hspace{0.01in}\prime}\,\delta(t-t^{\hspace{0.01in}\prime})\int\text{d _{q}^{3}x^{\prime}\,\delta_{q}^{\hspace{0.01in}3}(\mathbf{x}\oplus(\ominus\hspace {0.01in}\kappa^{-1}\mathbf{x}^{\hspace{0.01in}\prime}))\circledast \varrho(\mathbf{x}^{\hspace{0.01in}\prime}\hspace{-0.01in},t^{\hspace {0.01in}\prime})\nonumber\\ & \qquad=\operatorname*{vol}\nolimits^{-1}\hspace{-0.02in}\int\text{d _{q}^{3}x^{\prime}\,\delta_{q}^{\hspace{0.01in}3}(\mathbf{x}\oplus(\ominus\hspace {0.01in}\kappa^{-1}\mathbf{x}^{\hspace{0.01in}\prime}))\circledast \varrho(\mathbf{x}^{\hspace{0.01in}\prime}\hspace{-0.01in},t)=\varrho (\mathbf{x},t). \end{align} Note that the second step of the above calculation results from Eq.~(\ref{SchrGlGreHab}). In the last step, we made use of the identities given in Eq.~(\ref{AlgChaIdeqDelFkt}) of Chap.~\ref{KapOrtVolEBeWel}. Next, we derive some useful identities for the $q$-de\-formed Schr\"{o}\-dinger propagators. We multiply both sides of the integral equations given in Eqs.~(\ref{DefProDreDim1}) and (\ref{DefProDreDim2}) by the Heaviside function and take into account Eqs.~(\ref{DefRetPro1}) and (\ref{DefRetPro2}). 
Thus, we get
\begin{align}
\theta(t^{\prime}-t)\,\phi_{R}(\mathbf{x}^{\prime},t^{\prime}) & =\int\text{d}_{q}^{3}x\,(K_{R})^{+}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t)\circledast\phi_{R}(\mathbf{x},t),\nonumber\\
\theta(t^{\prime}-t)\,\phi_{L}(\mathbf{x}^{\prime},t^{\prime}) & =\int\text{d}_{q}^{3}x\,\phi_{L}(\mathbf{x},t)\circledast(K_{L})^{+}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime}),
\label{ChaGleSchPro1}
\end{align}
and
\begin{align}
\theta(t^{\prime}-t)\,\phi_{R}^{\ast}(\mathbf{x}^{\prime},t^{\prime}) & =\int\text{d}_{q}^{3}x\,(K_{R}^{\ast})^{+}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t)\circledast\phi_{R}^{\ast}(\mathbf{x},t),\nonumber\\
\theta(t^{\prime}-t)\,\phi_{L}^{\ast}(\mathbf{x}^{\prime},t^{\prime}) & =\int\text{d}_{q}^{3}x\,\phi_{L}^{\ast}(\mathbf{x},t)\circledast(K_{L}^{\ast})^{+}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime}).
\label{ChaGleSchPro2}
\end{align}
Similar relations hold for the advanced propagators. Applying the first identity of Eq.~(\ref{ChaGleSchPro1}) twice, we obtain
\begin{align}
\phi_{R}(\mathbf{x}^{\prime},t^{\prime})= & \int\text{d}_{q}^{3}x\,(K_{R})^{+}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t)\circledast\phi_{R}(\mathbf{x},t)\nonumber\\
= & \int\text{d}_{q}^{3}x\int\text{d}_{q}^{3}x^{\prime\prime}\,(K_{R})^{+}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x}^{\prime\prime},t^{\prime\prime})\circledast(K_{R})^{+}(\mathbf{x}^{\prime\prime},t^{\prime\prime};\mathbf{x},t)\circledast\phi_{R}(\mathbf{x},t).
\label{HerZusPro}
\end{align}
Since we have assumed $t\leq t^{\prime\prime}\leq t^{\prime}$ for the retarded propagators, we could omit the Heaviside functions in the expressions above. By comparing the second expression in Eq.~(\ref{HerZusPro}) to the last one, we can see that the following identity holds:\footnote{For the advanced propagators, we have $t^{\prime}\leq t^{\prime\prime}\leq t$.}
\begin{equation}
(K_{R})^{\pm}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t)=\int\text{d}_{q}^{3}x^{\prime\prime}\,(K_{R})^{\pm}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x}^{\prime\prime},t^{\prime\prime})\circledast(K_{R})^{\pm}(\mathbf{x}^{\prime\prime},t^{\prime\prime};\mathbf{x},t).
\label{ComFree1Hab}
\end{equation}
In the same manner, we get
\begin{equation}
(K_{L})^{\pm}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime})=\int\text{d}_{q}^{3}x^{\prime\prime}\,(K_{L})^{\pm}(\mathbf{x},t;\mathbf{x}^{\prime\prime},t^{\prime\prime})\circledast(K_{L})^{\pm}(\mathbf{x}^{\prime\prime},t^{\prime\prime};\mathbf{x}^{\prime},t^{\prime}).
\end{equation}
We can also show how the $q$-de\-formed Schr\"o\-dinger propagators behave under conjugation. From the conjugation properties of the $q$-de\-formed plane waves [cf. Eq.~(\ref{KonEbeWel}) in Chap.~\ref{LoeSchGleKap}] together with the formulas given in Eqs.~(\ref{IntKer1}) and (\ref{IntKer2}), it follows that
\begin{align}
\overline{(K_{R})^{\pm}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t)} & =(K_{L})^{\pm}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime}),\nonumber\\
\overline{(K_{L}^{\ast})^{\pm}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime})} & =(K_{R}^{\ast})^{\pm}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t).
\label{KonFreProNR}
\end{align}
Finally, we show how to derive the momentum space form of the $q$-de\-formed Schr\"o\-dinger propagators. We demonstrate our considerations using the example of the propagator $(K_{R})^{\pm}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t)$:
\begin{align}
(K_{R})^{\pm}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t) & =\theta(\pm(t^{\prime}-t))\int\text{d}_{q}^{3}p\,u_{\mathbf{p}}(\mathbf{x}^{\prime},t^{\prime})\circledast(u^{\ast})_{\mathbf{p}}(\mathbf{x},t)\nonumber\\
& =\frac{\pm\,\text{i}}{2\pi}\lim_{\varepsilon\rightarrow 0^{+}}\int\nolimits_{-\infty}^{+\infty}\text{d}E\,\frac{\operatorname{e}^{\text{i}E(t-t^{\prime})}}{E\pm\text{i}\varepsilon}\int\text{d}_{q}^{3}p\,u_{\mathbf{p}}(\mathbf{x}^{\prime})\circledast\exp\left(\frac{\text{i}\mathbf{p}^{2}(t-t^{\prime})}{2m}\right)\circledast(u^{\ast})_{\mathbf{p}}(\mathbf{x})\nonumber\\
& =\frac{1}{2\pi}\int\text{d}_{q}^{3}p\,u_{\mathbf{p}}(\mathbf{x}^{\prime})\circledast\left(\int\text{d}E\,\operatorname{e}^{\text{i}E(t-t^{\prime})}(K_{R})^{\pm}(\mathbf{p},E)\right)\circledast(u^{\ast})_{\mathbf{p}}(\mathbf{x}).
\label{GreFktImpRau}
\end{align}
In the first step of the above calculation, we wrote the Heaviside function as an integral. Next, we replaced the energy variable $E$ by $E-\mathbf{p}^{2}/(2m)$. The latter is possible because $\mathbf{p}^{2}$ is a central element of the momentum algebra. This way, we can read off the Schr\"o\-din\-ger propagator in momentum space:
\begin{equation}
(K_{R})^{\pm}(\mathbf{p},E)=\pm\,\text{i}\,(E-\mathbf{p}^{2}/(2m)\pm\text{i}\varepsilon)^{-1}.
\label{SchProImpL}
\end{equation}
Note that we must write the Schr\"o\-dinger propagator in momentum space as a series in nor\-mal-or\-dered monomials of momentum coordinates. To get this series, we first write the right-hand side of Eq.~(\ref{SchProImpL}) as a power series in $\mathbf{p}^{2}$. Then we apply the formula in Eq.~(\ref{EntPotP}) of Chap.~\ref{LoeSchGleKap}. Similar reasoning holds for the other $q$-de\-formed versions of the Schr\"o\-dinger propagator. Thus, we also have
\begin{gather}
(K_{L})^{\pm}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime})=\theta(\pm(t^{\prime}-t))\int\text{d}_{q}^{3}p\,(u^{\ast})^{\mathbf{p}}(\mathbf{x},t)\circledast u^{\mathbf{p}}(\mathbf{x}^{\prime},t^{\prime})\nonumber\\
=\frac{1}{2\pi}\int\text{d}_{q}^{3}p\,(u^{\ast})^{\mathbf{p}}(\mathbf{x})\circledast\left(\int\text{d}E\,\operatorname{e}^{\text{i}E(t-t^{\prime})}(K_{L})^{\pm}(\mathbf{p},E)\right)\circledast u^{\mathbf{p}}(\mathbf{x}^{\prime}),
\label{GreFktImpRau2}
\end{gather}
where
\begin{equation}
(K_{L})^{\pm}(\mathbf{p},E)=\pm\,\text{i}\,(E+\mathbf{p}^{2}/(2m)\pm\text{i}\varepsilon)^{-1}.
\label{SchProImpR}
\end{equation}
Taking into account Eq.~(\ref{ZusGreAvaRetN}), we also have
\begin{align}
(K_{R}^{\ast})^{\pm}(\mathbf{x}^{\prime},t^{\prime};\mathbf{x},t) & =\frac{1}{2\pi}\int\text{d}_{q}^{3}p\,(u^{\ast})^{\mathbf{p}}(\mathbf{x}^{\prime})\circledast\left(\int\text{d}E\,\operatorname{e}^{\text{i}E(t-t^{\prime})}(K_{R}^{\ast})^{\pm}(\mathbf{p},E)\right)\circledast u^{\mathbf{p}}(\mathbf{x}),\label{GreFktImpRau3}\\[0.08in]
(K_{L}^{\ast})^{\pm}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime}) & =\frac{1}{2\pi}\int\text{d}_{q}^{3}p\,u_{\mathbf{p}}(\mathbf{x})\circledast\left(\int\text{d}E\,\operatorname{e}^{\text{i}E(t-t^{\prime})}(K_{L}^{\ast})^{\pm}(\mathbf{p},E)\right)\circledast(u^{\ast})_{\mathbf{p}}(\mathbf{x}^{\prime}),\label{GreFktImpRau4}
\end{align}
with
\begin{align}
(K_{R}^{\ast})^{\pm}(\mathbf{p},E) & =\pm\,\text{i}\,(E-\mathbf{p}^{2}/(2m)\pm\text{i}\varepsilon)^{-1},\nonumber\\
(K_{L}^{\ast})^{\pm}(\mathbf{p},E) & =\pm\,\text{i}\,(E+\mathbf{p}^{2}/(2m)\pm\text{i}\varepsilon)^{-1}.
\end{align}
Immediately, we can verify that the expressions in Eqs.~(\ref{GreFktImpRau}), (\ref{GreFktImpRau2}), (\ref{GreFktImpRau3}), and (\ref{GreFktImpRau4}) satisfy the wave equations given in Eqs.~(\ref{SchrGlGreHab}) and (\ref{SchrGlGreHab2})-(\ref{SchrGlGreHabEnd}).
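For the reader's convenience, we recall the standard integral representation of the Heaviside function that underlies the first step of Eq.~(\ref{GreFktImpRau}) and fixes the sign conventions in Eqs.~(\ref{SchProImpL}) and (\ref{SchProImpR}):
\begin{equation}
\theta(\pm(t^{\prime}-t))=\frac{\pm\,\text{i}}{2\pi}\lim_{\varepsilon\rightarrow 0^{+}}\int\nolimits_{-\infty}^{+\infty}\text{d}E\,\frac{\operatorname{e}^{\text{i}E(t-t^{\prime})}}{E\pm\text{i}\varepsilon}.
\end{equation}
This identity follows by closing the integration contour in the upper or lower half of the complex $E$-plane, depending on the sign of $t-t^{\prime}$.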
For example, for the propagator $(K_{L}^{\ast})^{\pm}$ we have
\begin{align}
(K_{L}^{\ast})^{\pm}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime})\triangleleft(\partial_{t^{\prime}}\text{i}-H_{0}^{\prime})
& =\frac{\pm\,\text{i}}{2\pi}\int\text{d}_{q}^{3}p\,u_{\mathbf{p}}(\mathbf{x})\circledast\lim_{\varepsilon\rightarrow 0^{+}}\int\text{d}E\,\operatorname{e}^{\text{i}E(t-t^{\prime})}(E+\mathbf{p}^{2}/(2m)\pm\text{i}\varepsilon)^{-1}\nonumber\\
& \qquad\qquad\circledast(-E-\mathbf{p}^{2}/(2m))\circledast(u^{\ast})_{\mathbf{p}}(\mathbf{x}^{\prime})\nonumber\\
& =\frac{\mp\,\text{i}}{2\pi}\int\text{d}E\,\operatorname{e}^{\text{i}E(t-t^{\prime})}\int\text{d}_{q}^{3}p\,u_{\mathbf{p}}(\mathbf{x})\circledast(u^{\ast})_{\mathbf{p}}(\mathbf{x}^{\prime})\nonumber\\
& =\mp\,\text{i}\operatorname{vol}^{-1}\delta(t-t^{\prime})\,\delta_{q}^{3}(\mathbf{x}\oplus(\ominus\,\kappa^{-1}\mathbf{x}^{\prime})).
\label{DifGleProSchrGle}
\end{align}
In the first step of the calculation in Eq.~(\ref{DifGleProSchrGle}), we made use of an identity that we have proven in Fig.~\ref{Bild82} by graphical methods.\footnote{For the application of these graphical methods, see Ref.~\cite{Majid:2002kd} and the appendix of Ref.~\cite{Wachter:2019A}.} The last step in Eq.~(\ref{DifGleProSchrGle}) follows from the completeness relations in Eq.~(\ref{VolRelZeiWelDreDim1}) of Chap.~\ref{KapOrtVolEBeWel} by setting $t=0$.
\begin{figure}[ptb]
\centerline{\psfig{figure=Fig5.eps,width=4.0257in}}
\caption{Proof of the first step in Eq.~(\ref{DifGleProSchrGle}).}
\label{Bild82}
\end{figure}

\section{Expectation values of position and momentum\label{ErwOrtImpKapN}}
In this chapter, we consider the expectation values of the operators for momentum and position. We calculate these expectation values for solutions to the free $q$-de\-formed Schr\"o\-dinger equations [cf. Eqs.~(\ref{FreParSch1N}) and (\ref{KonSchrGle}) in Chap.~\ref{LoeSchGleKap}]. We require that the solutions to the free $q$-de\-formed Schr\"o\-dinger equations are subject to the following normalization condition:
\begin{equation}
1=\frac{1}{2}\int\text{d}_{q}^{3}x\left(\phi_{L}^{\ast}(\mathbf{x},t)\circledast\phi_{R}(\mathbf{x},t)+\phi_{L}(\mathbf{x},t)\circledast\phi_{R}^{\ast}(\mathbf{x},t)\right).
\label{NorBed}
\end{equation}
This condition is equivalent to
\begin{equation}
1=\frac{1}{2}\int\text{d}_{q}^{3}p\left((c^{\ast})_{\mathbf{p}}\circledast c_{\mathbf{p}}+c^{\mathbf{p}}\circledast(c^{\ast})^{\mathbf{p}}\right).
\label{NorBedImp}
\end{equation}
You can see this by inserting the expressions of Eqs.~(\ref{EntWicEbeWelDreDim1}) and (\ref{EntWicEbeWelDreDim2}) into Eq.~(\ref{NorBed}) and proceeding in the following manner:
\begin{align}
\int\text{d}_{q}^{3}x\,\phi_{L}^{\ast}(\mathbf{x},t)\circledast\phi_{R}(\mathbf{x},t)
& =\int\text{d}_{q}^{3}p\int\text{d}_{q}^{3}p^{\prime}\,(c^{\ast})_{\mathbf{p}}\circledast\int\text{d}_{q}^{3}x\,(u^{\ast})_{\mathbf{p}}(\mathbf{x},t)\circledast u_{\mathbf{p}^{\prime}}(\mathbf{x},t)\circledast c_{\mathbf{p}^{\prime}}\nonumber\\
& =\int\text{d}_{q}^{3}p\int\text{d}_{q}^{3}p^{\prime}\,(c^{\ast})_{\mathbf{p}}\circledast\operatorname{vol}^{-1}\delta_{q}^{3}((\ominus\,\kappa^{-1}\mathbf{p})\oplus\mathbf{p}^{\prime})\circledast c_{\mathbf{p}^{\prime}}\nonumber\\
& =\int\text{d}_{q}^{3}p\,(c^{\ast})_{\mathbf{p}}\circledast c_{\mathbf{p}}.
\label{UmNorBedImp}
\end{align}
Note that the last two steps of the above calculation follow from Eqs.~(\ref{OrtRelEbeWel0Schr}) and (\ref{AlgChaIdeqDelFkt}) of Chap.~\ref{KapOrtVolEBeWel}. We continue with the expectation value of the momentum operator. We determine this expectation value in position space as well as in momentum space:
\begin{align}
\langle P^{A}\rangle_{\phi} & =\frac{1}{2}\int\text{d}_{q}^{3}x\left(\phi_{L}^{\ast}(\mathbf{x},t)\circledast\text{i}^{-1}\partial^{A}\triangleright\phi_{R}(\mathbf{x},t)+\phi_{L}(\mathbf{x},t)\circledast\text{i}^{-1}\partial^{A}\,\bar{\triangleright}\,\phi_{R}^{\ast}(\mathbf{x},t)\right)\nonumber\\
& =\frac{1}{2}\int\text{d}_{q}^{3}p\left((c^{\ast})_{\mathbf{p}}\circledast p^{A}\circledast c_{\mathbf{p}}+c^{\mathbf{p}}\circledast p^{A}\circledast(c^{\ast})^{\mathbf{p}}\right).
\label{ErwImpSchTeiFre1}
\end{align}
To obtain the expression in momentum space from that in position space, we proceed as follows:
\begin{align}
\int\text{d}_{q}^{3}x\,\phi_{L}^{\ast}(\mathbf{x},t)\circledast\text{i}^{-1}\partial^{A}\triangleright\phi_{R}(\mathbf{x},t)
& =\int\text{d}_{q}^{3}p\int\text{d}_{q}^{3}p^{\prime}\,(c^{\ast})_{\mathbf{p}}\circledast\int\text{d}_{q}^{3}x\,(u^{\ast})_{\mathbf{p}}(\mathbf{x},t)\circledast u_{\mathbf{p}^{\prime}}(\mathbf{x},t)\circledast p^{A}\circledast c_{\mathbf{p}^{\prime}}\nonumber\\
& =\int\text{d}_{q}^{3}p\int\text{d}_{q}^{3}p^{\prime}\,(c^{\ast})_{\mathbf{p}}\circledast\operatorname{vol}^{-1}\delta_{q}^{3}((\ominus\,\kappa^{-1}\mathbf{p})\oplus\mathbf{p}^{\prime})\circledast p^{A}\circledast c_{\mathbf{p}^{\prime}}\nonumber\\
& =\int\text{d}_{q}^{3}p\,(c^{\ast})_{\mathbf{p}}\circledast p^{A}\circledast c_{\mathbf{p}}.
\label{UmErwImp}
\end{align}
In the first step of the above calculation, we used the series expansion in terms of $q$-de\-formed plane waves [see Eq.~(\ref{EntWicEbeWelDreDim1}) in the previous chapter] and applied the eigenvalue equations for $q$-de\-formed plane waves [see Eq.~(\ref{ImpEigWelSol}) in Chap.~\ref{LoeSchGleKap}]:
\begin{align}
\text{i}^{-1}\partial^{A}\triangleright\phi_{R}(\mathbf{x},t) & =\int\text{d}_{q}^{3}p\,\text{i}^{-1}\partial^{A}\triangleright u_{\mathbf{p}}(\mathbf{x},t)\circledast c_{\mathbf{p}}\nonumber\\
& =\int\text{d}_{q}^{3}p\,u_{\mathbf{p}}(\mathbf{x},t)\circledast p^{A}\circledast c_{\mathbf{p}}.
\end{align}
The further steps in Eq.~(\ref{UmErwImp}) correspond to those in Eq.~(\ref{UmNorBedImp}). The expectation value of the momentum operator behaves as follows under conjugation:
\begin{equation}
\overline{\langle P^{A}\rangle_{\phi}}=\langle P_{A}\rangle_{\phi}.
\end{equation}
This identity follows from the last expression in Eq.~(\ref{ErwImpSchTeiFre1}) if we take into account Eq.~(\ref{KonBedEntKoe}) of Chap.~\ref{KapOrtVolEBeWel} as well as the conjugation properties of momentum coordinates, $q$-integral, and star product [cf. Eq.~(\ref{KonEigSteProFkt}) in Chap.~\ref{KapQuaZeiEle} and Eq.~(\ref{KonEigVolInt}) in Chap.~\ref{KapParDer}]:
\begin{equation}
\overline{\int\text{d}_{q}^{3}p\,(c^{\ast})_{\mathbf{p}}\circledast p^{A}\circledast c_{\mathbf{p}}}=\int\text{d}_{q}^{3}p\,c^{\mathbf{p}}\circledast p_{A}\circledast(c^{\ast})^{\mathbf{p}}.
\end{equation}
We can also write down expressions for the expectation value of the position operator. Concretely, we have
\begin{align}
\langle X^{A}\rangle_{\phi} & =\frac{1}{2}\int\text{d}_{q}^{3}x\left(\phi_{L}^{\ast}(\mathbf{x},t)\circledast x^{A}\circledast\phi_{R}(\mathbf{x},t)+\phi_{L}(\mathbf{x},t)\circledast x^{A}\circledast\phi_{R}^{\ast}(\mathbf{x},t)\right)\nonumber\\
& =\frac{1}{2}\int\text{d}_{q}^{3}p\left((c^{\ast})_{\mathbf{p}}(t)\circledast\text{i}\partial_{p}^{A}\,\bar{\triangleright}\,c_{\mathbf{p}}(t)+c^{\mathbf{p}}(t)\circledast\text{i}\partial_{p}^{A}\triangleright(c^{\ast})^{\mathbf{p}}(t)\right),
\label{ErwOrtSchTeiFre1}
\end{align}
with
\begin{align}
c_{\mathbf{p}}(t) & =\exp\left(-\frac{\text{i}t\,\mathbf{p}^{2}}{2m}\right)\circledast c_{\mathbf{p}}, & (c^{\ast})_{\mathbf{p}}(t) & =(c^{\ast})_{\mathbf{p}}\circledast\exp\left(\frac{\text{i}t\,\mathbf{p}^{2}}{2m}\right),\nonumber\\
c^{\mathbf{p}}(t) & =c^{\mathbf{p}}\circledast\exp\left(\frac{\text{i}t\,\mathbf{p}^{2}}{2m}\right), & (c^{\ast})^{\mathbf{p}}(t) & =\exp\left(-\frac{\text{i}t\,\mathbf{p}^{2}}{2m}\right)\circledast(c^{\ast})^{\mathbf{p}}.
\label{ZeiAbhDKoe}
\end{align}
To derive the last expression in Eq.~(\ref{ErwOrtSchTeiFre1}) from that in position space, we use the series expansion in terms of $q$-de\-formed plane waves together with the eigenvalue equations
\begin{align}
x^{A}\circledast\exp_{q}(\mathbf{x}|\text{i}\mathbf{p}) & =\exp_{q}(\mathbf{x}|\text{i}\mathbf{p})\,\bar{\triangleleft}\,\partial_{p}^{A}\text{i},\nonumber\\
\text{i}\partial_{p}^{A}\triangleright\exp_{q}(\text{i}^{-1}\mathbf{p}|\mathbf{x}) & =\exp_{q}(\text{i}^{-1}\mathbf{p}|\mathbf{x})\circledast x^{A},
\end{align}
and
\begin{align}
\exp_{q}^{\ast}(\text{i}\mathbf{p}|\mathbf{x})\circledast x^{A} & =\text{i}\partial_{p}^{A}\,\bar{\triangleright}\,\exp_{q}^{\ast}(\text{i}\mathbf{p}|\mathbf{x}),\nonumber\\
\exp_{q}^{\ast}(\mathbf{x}|\text{i}^{-1}\mathbf{p})\triangleleft\partial_{p}^{A}\text{i} & =x^{A}\circledast\exp_{q}^{\ast}(\mathbf{x}|\text{i}^{-1}\mathbf{p}).
\end{align}
So, we can proceed similarly as in Eq.~(\ref{UmErwImp}):
\begin{align}
\int\text{d}_{q}^{3}x\,\phi_{L}^{\ast}(\mathbf{x},t)\circledast x^{A}\circledast\phi_{R}(\mathbf{x},t)
& =\int\text{d}_{q}^{3}p\int\text{d}_{q}^{3}p^{\prime}\,(c^{\ast})_{\mathbf{p}}(t)\circledast\int\text{d}_{q}^{3}x\,\text{i}\partial_{p}^{A}\,\bar{\triangleright}\,(u^{\ast})_{\mathbf{p}}(\mathbf{x},t)\circledast u_{\mathbf{p}^{\prime}}(\mathbf{x},t)\circledast c_{\mathbf{p}^{\prime}}(t)\nonumber\\
& =\int\text{d}_{q}^{3}p\,(c^{\ast})_{\mathbf{p}}(t)\circledast\text{i}\partial_{p}^{A}\,\bar{\triangleright}\,\operatorname{vol}^{-1}\int\text{d}_{q}^{3}p^{\prime}\,\delta_{q}^{3}((\ominus\,\kappa^{-1}\mathbf{p})\oplus\mathbf{p}^{\prime})\circledast c_{\mathbf{p}^{\prime}}(t)\nonumber\\
& =\int\text{d}_{q}^{3}p\,(c^{\ast})_{\mathbf{p}}(t)\circledast\text{i}\partial_{p}^{A}\,\bar{\triangleright}\,c_{\mathbf{p}}(t).
\end{align}
For the sake of completeness, we note that the expectation value $\langle X^{A}\rangle_{\phi}$ behaves under conjugation as follows:
\begin{equation}
\overline{\langle X^{A}\rangle_{\phi}}=\langle X_{A}\rangle_{\phi}.
\end{equation}
This identity follows from the expression for $\langle X^{A}\rangle_{\phi}$ in position space if we take into account Eq.~(\ref{VerKonWelFkt}) in Chap.~\ref{LoeSchGleKap} as well as the conjugation properties of spatial coordinates, $q$-integral, and star product. Also note that the expectation value $\langle X^{A}\rangle_{\phi}$ is usually not time-independent, unlike the expectation value $\langle P^{A}\rangle_{\phi}$. You can see this from the expressions for $\langle X^{A}\rangle_{\phi}$ and $\langle P^{A}\rangle_{\phi}$ in momentum space [cf. Eqs.~(\ref{ErwImpSchTeiFre1}) and (\ref{ErwOrtSchTeiFre1})]: since $\mathbf{p}^{2}$ is central, the time-dependent phase factors of Eq.~(\ref{ZeiAbhDKoe}) cancel pairwise in Eq.~(\ref{ErwImpSchTeiFre1}), whereas the derivatives $\partial_{p}^{A}$ in Eq.~(\ref{ErwOrtSchTeiFre1}) act on these phase factors and thus produce an explicit time dependence.
\bibliographystyle{abbrv}
\section{Introduction}
Carbon monoxide (CO) is the second most abundant diatomic molecule (after H$_2$) in the Universe, and, hence, detailed data on its spectral properties are indispensable for solving many fundamental and practical problems concerning the CO molecule. The $a^3\Pi - X^1\Sigma^+$ Cameron system of CO~\cite{Cameron1926}, lying in the ultraviolet region (170-270 nm), connects the ground singlet $X^1\Sigma^+$ state with the lowest excited $a^3\Pi$ state (see Fig.~\ref{Fig_PEC}). The upper triplet state is metastable~\cite{Fournier, James, tauCO2007} since the spin-forbidden $a^3\Pi - X^1\Sigma^+$ electronic transition is extremely weak (its oscillator strength is only about 10$^{-7}$-10$^{-8}$~\cite{James, Minaev1995}). Nevertheless, the Cameron bands are well observed in both absorption and emission spectra due to the regular spin-orbit coupling of the $a^3\Pi$ state with the remote singlet states manifold~\cite{James_int}. The rovibronic, fine and hyperfine structure as well as the radiative, collisional, magnetic and electric properties of the ground and excited electronic states of the CO molecule have been comprehensively studied in a huge number of both experimental and theoretical works (see, for instance, the textbook~\cite{Field2004book} and references therein). However, this is not entirely the case for the extremely weak $a^3\Pi - X^1\Sigma^+$ transition, in spite of the long-standing experimental efforts devoted to the radiative lifetime determination of the $a^3\Pi_{\Omega^{\pm}=0,1,2}$ substates~\cite{Fournier, James, tauCO2007, tauCO1999, tauCO2000} as well as to intensity measurements for the band structure of the $a^3\Pi - X^1\Sigma^+$ transition~\cite{James_int}. The intercombination (triplet-singlet) Cameron system has never been studied within a fully relativistic approximation combined with a state-of-the-art electron correlation treatment. To the best of our knowledge, \emph{ab initio} studies of the $a^3\Pi - X^1\Sigma^+$ transition probabilities have so far been limited to the multi-configurational self-consistent field (MCSCF) quadratic response approach~\cite{Minaev1995} or to the spin-orbit coupling perturbation theory~\cite{Fournier, James, tauCO2007}.
\begin{figure}[t!]
\includegraphics[scale=0.4]{Fig1.eps}
\caption{Scheme of the low-lying electronic states of the CO molecule constructed from the empirical Rydberg-Klein-Rees potentials borrowed from Ref.~\cite{RKR_CO} (open symbols) and the present relativistic potential energy curves (solid symbols and line; $\Omega=0^+,1$ -- FS-RCC and $\Omega=0^-$ -- RCI) corresponding to the pure Hund's ``\textbf{c}'' coupling case.}\label{Fig_PEC}
\end{figure}
Due to the high sensitivity of the rovibronic energies of molecular lines to the nuclear and electron masses, high-redshift quasar absorption spectra of the H$_2$ and CO molecules~\cite{Varshalovich, Ivanchik} are commonly used to probe a temporal variation of the proton-to-electron mass ratio ($m_p/m_e$)~\cite{Ubachs2016}. In particular, the Cameron bands of the CO molecule were identified in the spectrum of the quasar QSO 1556+3517 with the redshift $z=1.48$~\cite{Dubrovich1999}. The unprecedentedly accurate spectroscopic measurement of the $a^3\Pi - X^1\Sigma^+(0,0)$ band of various isotopomers of CO confirms the extreme sensitivity of electronic transitions involving nearly degenerate rovibronic levels for probing the variation of $m_p/m_e$ on the laboratory time scale~\cite{Ubachs2011}.
The search for a presumable drift of the fine structure constant $\alpha=e^2/\hbar c$ is, by contrast, carried out via atomic line measurements~\cite{Murphy}. At the same time, an impact of the $\alpha$ change on the spin-orbit splitting and on the intercombination $a^3\Pi - X^1\Sigma^+$ transition probabilities of CO should apparently be expected, thus allowing one to consider both relativistic and mass effects simultaneously. For the experimental verification of the possible drift of the fundamental constants, the sensitivity coefficients of the wavelengths and intensities of the corresponding molecular transitions to variations of the parameters $m_p/m_e$ and $\alpha$ ($K^{\mu}$ and $K^{\alpha}$) are crucial~\cite{Meshkov2006}. The impact of scalar relativistic effects on the spectral characteristics of the isolated $X^1\Sigma^+$ state of the CO molecule has been studied recently~\cite{Konovalova2018}. Obtaining highly accurate $K^{\mu}$ and $K^{\alpha}$ estimates for most electronic transitions in CO is a non-trivial task which indispensably requires quantum modeling beyond the conventional non-relativistic and adiabatic (Born-Oppenheimer) approximations~\cite{Bernath2005book}. Therefore, the purposes of this work were to obtain the most reliable estimates for the very weak $a^3\Pi - X^1\Sigma^+$ transition probabilities and the radiative lifetimes of the metastable $a^3\Pi$ state of the CO molecule, as well as to investigate the sensitivity of the transition strength of the CO Cameron system to a presumable drift of the fine structure constant, $\alpha$, on the cosmological time scale.

\section{Computational machinery}
The spin-forbidden $a^3\Pi_{\Omega=0^+,1} - X^1\Sigma^+$ transition dipole moments (TDMs) of the CO molecule, as functions of the internuclear distance $R$, were calculated \emph{ab initio} by means of three computational methods taking relativistic and electron-correlation effects into account in different manners (see Sections~\ref{GRPP}-\ref{MRCI} for details). \emph{A priori}, the most accurate results were obtained in the framework of fully relativistic multi-reference Fock space coupled cluster (FS-RCC) calculations (Section~\ref{FSRCC}), which utilized the advantages of the finite-field approach to transition property evaluation and of generalized relativistic pseudopotentials (GRPPs) to simulate the relativistic structure of both the C and O atoms. It should be noticed that the originally constructed GRPPs (Section~\ref{GRPP}) allow one to treat the Coulomb interactions of \emph{all} electrons of both atoms explicitly (so-called ``empty-core'' GRPPs). Under the conventional non-relativistic approximation (with the point nuclear model), the corresponding GRPPs of the light atoms vanish. The fully relativistic $a^3\Pi_{0^+,1} - X^1\Sigma^+$ TDM functions were also evaluated using the large-scale multi-reference configuration interaction (MRCI) method employing the Dirac--Coulomb electronic Hamiltonian with the incorporation of Gaunt interactions at the spinor generation stage and the so-called ``exact'' (X2C) transformation to the two-component picture~\cite{Ilias:07} (Section~\ref{RMRCI}). To monitor the MRCI energy and wavefunction convergence, various compositions of the active space were considered. The relativistic MRCI calculation (further referred to as RCI) performed with the DIRAC package~\cite{DIRAC:19} was found to be a rather time-consuming process. Moreover, its convergence strongly depends on the particular choice of the active space and is frequently affected by numerical instabilities. Alternatively, the spin-orbit interaction between $a^3\Pi$ and the low-lying singlet states manifold of the CO molecule has been approximately accounted for in a perturbative manner (Section~\ref{MRCI}) using the \emph{scalar-state-interaction} (SSI) method, which is based on a diagonalization of the entire electronic Hamiltonian $\hat{H}_{sr}$+$\hat{H}_{so}$ built in a limited basis of the eigenfunctions of the scalar-relativistic Hamiltonian $\hat{H}_{sr}$. The required eigenvalues and eigenfunctions of the scalar-relativistic electronic Hamiltonian $\hat{H}_{sr}$ were obtained by means of the internally contracted MRCI method implemented in the MOLPRO software~\cite{MOLPRO2012}.

The resulting \emph{ab initio} $a^3\Pi_i - X^1\Sigma^+$ transition dipole moments, $d_{a^3\Pi_i - X}(R)$, and the corresponding difference potentials, $U_{a^3\Pi_i}(R) - U_X(R)$, were then applied to estimate the radiative lifetimes $\tau(\Omega^{\pm},v^{\prime},J^{\prime})$ of the fine-structure $\Omega=0,1,2$ components of the $a^3\Pi$ state as functions of the vibrational $v^{\prime}$ and rotational $J^{\prime}$ quantum numbers. The required multi-component (non-adiabatic) vibrational wave functions of the $a^3\Pi_{\Omega^{\pm}}$ substates were obtained in rigorous close-coupled (CC) calculations (Section~\ref{rtau}), which accounted explicitly for the spin-orbit splitting of the triplet state as well as for the spin-rotational interaction between its $\Omega$-components~\cite{Klemperer}.

The dependence of the electronic excitation energies, $\Delta U_{\Omega-X}(R)=U_\Omega (R) - U_{X0^+}(R)$, and of the corresponding electric dipole moments, $d_{\Omega-X}(R)$, for the two lowest $\Delta \Omega =\pm 1$ relativistic transitions in the CO molecule, $(1,2)\Omega-X0^+$, on the variation of the fine structure parameter $\alpha$ is usually expressed in terms of the dimensionless sensitivity coefficients
\begin{equation}\label{Kalpha}
K^{\alpha}_{f}=\frac{\partial f}{\partial \alpha}\cdot\frac{\alpha}{f}
=\frac{\partial \ln (f)}{\partial \ln (\alpha)},
\end{equation}
where $f$ stands for the excitation energy ($\Delta U$) or for the absolute value of the transition moment ($d$). These quantities were evaluated within the finite-difference approximation, changing the speed of light in the FS-RCC calculations from the standard value $c=137.03599911$~a.u. to $c_{-}=c/\sqrt{1.1}$ and $c_{+}=c/\sqrt{0.9}$. This was performed \emph{via} constructing two additional pseudopotentials for each atom with the modified speeds of light, $c_{-}$ and $c_{+}$, so that Eq.~(\ref{Kalpha}) is approximated by the symmetric difference quotient $K^{\alpha}_{f}\approx[\ln f(c_{-})-\ln f(c_{+})]/[\ln \alpha(c_{-})-\ln \alpha(c_{+})]$ with $\alpha(c_{\mp})=e^{2}/\hbar c_{\mp}$.

\subsection{Empty-core GRPP for light elements}\label{GRPP}
The scheme of generalized relativistic core effective potential generation developed for heavier elements in Refs.~\cite{Tupitsyn:95, Mosyagin:97, Petrov:04b, Mosyagin:06amin} was applied here with minor modifications. The theoretical background of the latter approach can be found in Ref.~\cite{Titov:99}, and the latest versions are reviewed in Refs.~\cite{Mosyagin:16, Mosyagin:17, Mosyagin:20a}. In particular, a fully relativistic Dirac-Fock-Breit (DFB) calculation was used to obtain the four-component spinors and their energies for the model state. Then, non-relativistic-type Hartree-Fock equations in the {\it jj}-coupling scheme were inverted to derive the GRPP components (potentials). Thus, these components effectively take the relativistic effects into account.
The question immediately arises about the accuracy of the resulting model. The transition energies between low-lying states of the C atom and its cation calculated with the help of different methods are listed in Table~\ref{C_trans}. The Dirac-Fock-Breit results (with the Fermi nuclear charge distribution model) tabulated in the second column are used as the reference values. The errors in reproducing them in the other calculations are listed in the following columns. The distinctions due to the point nuclear model and to the perturbative accounting for Breit interactions are negligible in comparison with the relativistic effects and are not presented in the Table (they are just zero at the level of accuracy used in the Table). The errors of the Dirac-Fock (DF) calculations without Breit interactions and with the speed of light enlarged 1000 times, i.e. with the effect of relativity practically switched off (``HF'' in Table~\ref{C_trans}), demonstrate the contributions of the Breit and relativistic effects which are to be simulated by the GRPP. One can see that the errors of the GRPP model (the 5th column) are more than one order of magnitude lower than the contributions of the relativistic effects (the 4th column) and a few times lower than the contribution of the Breit effects (the 3rd column). The constructed numerical potentials were replaced by their Gaussian approximations without any detectable loss of accuracy. Finally, the errors of the semilocal Valence and Core versions derived from the above (Full) GRPP by neglecting the difference between the potentials for the $1s$ and $2s$ spinors are listed in the last two columns. One can see that the Valence GRPP version (with the potentials optimized for reproducing the $2s$ and $2p$ spinors) is still acceptable for simulating the relativistic effects. Being compatible with most codes for relativistic electronic structure modelling, this version provides a useful (and, at present, unique) tool offering the possibility of describing Breit interactions in molecular all-electron calculations. This seems of particular importance for light-element compounds, where the Breit interaction frequently has a non-negligible contribution, in contrast to the rather weak spin-dependent relativistic effects. It should be noted that the potential acting on the $s$-electrons in the Valence version was constructed for the $2s$ spinor, whose large component has a radial node (unlike $1s$ for the Core version).
\begin{table*}
\caption{Transition energies ($\Delta E$) between some relativistic terms and states averaged over the non-relativistic configurations of the C atom and its cation from numerical DFB calculations, and the corresponding absolute errors in reproducing them in the different versions of the DF and GRPP calculations. All the values are in cm$^{-1}$.}\label{C_trans}
\begin{tabular}{lrrrrrr}
\hline\hline
Configuration & DFB & DF & HF &\multicolumn{3}{c}{GRPP}\\
\cline{5-7}
(Term) & & & & Full & Val.\ & Core \\
\hline
& $\Delta E$ & Error & Error & Error & Error & Error \\
\hline
\multicolumn{7}{l}{Nonrel.aver.\ $1s^2 2s^2 2p^2 \rightarrow$} \\
$1s^2 2s^2 2p^1 3s^1 $ & 52415 & 4 & 47 & 1 & -1 & 24 \\
$1s^2 2s^2 2p^1 $ & 80611 & 5 & 40 & 0 & -1 & 20 \\
$1s^2 2s^2 3s^1 $ & 195398 & 13 & 105 & 0 & -5 & 55 \\
$1s^2 2s^1 2p^3 $ & 70858 & 7 & -114 & 1 & 4 & -57 \\
$1s^2 2s^1 2p^2 3s^1 $ & 117676 & 10 & -85 & 3 & 4 & -43 \\
$1s^2 2s^1 2p^2 $ & 145447 & 11 & -93 & 2 & 4 & -47 \\
$1s^2 2s^1 2p^1 3s^1 $ & 250149 & 19 & -49 & 3 & 1 & -24 \\
$1s^2 2p^3 $ & 233835 & 17 & -234 & 1 & 8 & -119 \\
\hline
\multicolumn{7}{l}{Rel.term $\ldots 2p_{1/2}^1 2p_{3/2}^1 (J=1) \rightarrow$} \\
$\ldots 2p_{3/2}^2 (J=2) $ & 4288 & 14 & -43 & 4 & 4 & 5 \\
$\ldots 2p_{1/2}^1 2p_{3/2}^1 (J=2) $ & 8429 & 6 & 7 & 6 & 6 & 7 \\
$\ldots 2p_{1/2}^2 (J=0) $ & 10463 & -10 & 49 & 1 & 1 & 2 \\
$\ldots 2p_{3/2}^2 (J=0) $ & 20726 & 9 & -43 & -2 & -2 & 1 \\
\hline
\end{tabular}
\end{table*}

\subsection{Relativistic coupled cluster calculations}\label{FSRCC}
The Fock space relativistic coupled cluster~\cite{Visscher:01} calculations for the CO molecule with the series of Valence empty-core GRPPs described above employed the standard \emph{aug-cc-pVQZ} basis sets~\cite{Dunning:89, Kendall:92} on both centers. The set of one-electron spinors and the Fermi vacuum state were obtained by solving the spin--orbit-coupled Kramers-restricted SCF equations for the ground-state configuration of the neutral CO molecule. Excited states were described within the one-hole--one-particle ($1h1p$) Fock space sector; the ($1h1p$) model space was spanned by all singly excited configurations with a hole on one of the 8 highest-energy occupied spinors and a particle on one of the 24 lowest-energy virtual spinors. To prevent numerical instabilities in solving the FS-RCC amplitude equations, we used the adjustable denominator shift technique~\cite{Zaitsevskii:18a} in the form described in Ref.~\cite{Oleynichenko:20cpl} (``complex shift simulation''). In order to maintain core separability of the results, no shifting was applied in the vacuum ($0h0p$) sector. We also kept unshifted the denominators in the equations for the single de-excitation amplitudes in the ($1h1p$) sector. In the other sectors, the shift amplitudes $-0.15$~\emph{a.u.} for single excitations and $-0.30$~\emph{a.u.} for double excitations with the attenuation parameter $m=3$ [see Eqs.~(7)-(8) of Ref.~\cite{Oleynichenko:20cpl}] were assumed.
Transition dipole moments $D_{ij}$ between the electronic states $\psi_i$ and $\psi_j$ of CO were calculated with the help of the finite-field technique~\cite{Zaitsevskii:18, Zaitsevskii:20t}, using the approximate relation
\begin{equation}\label{basicff}
|D_{ij}|\approx\Delta E_{ij}
\left|\langle \tilde{\psi}_i^{\perp\!\!\perp} \vert \frac{\partial}{\partial{}F} \tilde{\psi}_j \rangle \right|^{1/2}
\left|\langle \tilde{\psi}_j^{\perp\!\!\perp} \vert \frac{\partial}{\partial{}F} \tilde{\psi}_i \rangle \right|^{1/2},
\end{equation}
where $F$ is the strength of the applied uniform electric field, $\Delta E_{ij}$ stands for the absolute value of the $i\to j$ transition energy, and $\{\tilde{\psi}_j\}$ are the (normalized) projections of the field-dependent electronic state wavefunctions $\psi_j$ onto the direct sum of the vacuum state and the ($1h1p$) model space, $\mathcal{L}^{(0h0p+1h1p)}$, which is constructed for $F=0$ and does not depend on $F$. The set of functions $\{\tilde{\psi}_i^{\perp\!\!\perp}\}$ is biorthogonal to the basis in $\mathcal{L}^{(0h0p+1h1p)}$ composed of $\{\tilde{\psi}_j\}$:
\begin{equation}\label{biorth}
\tilde{\psi}_i^{\perp\!\!\perp}=S^{-1}\tilde{\psi}_i,
\end{equation}
where $S$ is the overlap matrix, $S_{ij}= \langle \tilde{\psi}_i \vert \tilde{\psi}_j \rangle$. The derivatives in Eq.~(\ref{basicff}) are to be evaluated at $F=0$ and are estimated within the central finite-difference approximation, using the results of calculations with two different field strength values. In the present calculations, the step of numerical differentiation was assumed to be equal to 0.00005 or 0.0001~\emph{a.u.}; the results obtained with these two step sizes were practically identical. Although this version of the finite-field technique is based exclusively on the analysis of model-space entities, the contributions to $D_{ij}$ from the parts of the wavefunctions outside $\mathcal{L}^{(0h0p+1h1p)}$ are incorporated implicitly~\cite{Zaitsevskii:18, Zaitsevskii:98}. It is worth noting that, in contrast with the complete-model-space case~\cite{Zaitsevskii:18}, the projections $\{\tilde{\psi}_{i}\}$ are not directly obtained as eigenfunctions of the FS-RCC effective Hamiltonian. However, these eigenfunctions can be readily transformed into $\{\tilde{\psi}_{i}\}$ using the ``closed'' (i.e.\ acting within $\mathcal{L}^{(0h0p+1h1p)}$) part of the cluster operator~\cite{Zaitsevskii:20t}. The required one-electron spinors and molecular integrals were evaluated using the DIRAC19 program package~\cite{DIRAC:19, DIRAC:20}; the FS-RCC calculations were carried out with the help of the EXP-T code~\cite{Oleynichenko:20a, Oleynichenko:20, website:expt}.

\subsection{Relativistic configuration interaction calculations}\label{RMRCI}
The relativistic MRCI calculations employed the uncontracted version of the all-electron correlation-consistent \emph{aug-cc-pCVQZ} Dunning basis set~\cite{Woon:95}. In the first step, four-component calculations for the CO$^+$ cation were performed with the Dirac--Coulomb--Gaunt Hamiltonian to generate the set of one-electron spinors suitable for the description of excited electronic states of the neutral molecule. Then the four-component Dirac-Coulomb Hamiltonian was transformed into a two-component one, accurately reproducing the positive-energy spectrum of the original four-component Hamiltonian (the X2C Hamiltonian~\cite{Ilias:07}), which was used to perform correlation calculations in the frame of the MRCI method~\cite{Fleig:03}.
The correlations of 8 or 10 electrons of the CO molecule were taken into account, disregarding or accounting for the excitation of the 2$s$ electrons of the O atom, whereas the excitations of the 1$s$ electrons of C and O were always omitted. The configuration space included the excitations of up to two electrons from the complete-active-space spinors corresponding to the $4\sigma-6\sigma$, $1\pi,\;2\pi$ or $3\sigma-6\sigma$, $1\pi,\;2\pi$ scalar MOs.

\subsection{Scalar-state-interaction calculations}\label{MRCI}
The calculation of the individual electronic matrix elements of the spin-orbit operator $\hat{H}_{so}$ and of the transition dipole moment $\hat{d}$ between the scalar states, as well as the construction and diagonalization of the entire $\hat{H}_{sr}$ + $\hat{H}_{so}$ matrix, were accomplished within the HLSMAT procedure~\cite{SO} incorporated into the MOLPRO package. Similarly to the case of the FS-RCC calculations described above, empty-core GRPPs and the all-electron~\emph{aug-cc-pV5Z} basis sets~\cite{Kendall:92} were used for both atoms. The initial Hartree-Fock molecular orbitals (MOs) were optimized within the state-averaged complete active space self-consistent field (SA-CASSCF) method~\cite{Werner85}, taking the (1-4)$^{1,3}\Sigma^+$ and (1-4)$^{1,3}\Pi$ electronic states with equal weights. The dynamic correlation was accounted for all 14 electrons of the molecule within the internally contracted MRCISD method~\cite{iMRCI}. The active space used in the SA-CASSCF calculation consisted of six $\sigma$ and four $\pi$ MOs, while in the subsequent MRCISD calculation of the (1-2)$^3\Pi$, (1-3)$^1\Sigma^+$ and (1-3)$^1\Pi$ states two inner core $\sigma$ orbitals were kept doubly occupied.

\subsection{Radiative lifetime calculations}\label{rtau}
The rovibronic eigenvalues and eigenfunctions for the lowest rovibrational levels of the fine-structure components ($\Omega^{\pm}$, $v^{\prime}$, $J^{\prime}$) of the $a^3\Pi_{\Omega^{\pm}}$ state were obtained by solving the three close-coupled (CC) equations~\cite{DUO}
\begin{eqnarray}\label{CC}
\left(- {\bf I}\frac{\hbar^2 d^2}{2\mu dR^2} + {\bf V}(R;\mu,J^{\prime}) - {\bf I}E^{CC}\right)\mathbf{\Phi}(R) = 0
\end{eqnarray}
with the conventional boundary, $\phi_i(0)=\phi_i(\infty)=0$, and normalization, $\sum_{i}P_i=1$, conditions, where $i\in[a^3\Pi_0;a^3\Pi_1;a^3\Pi_2]$ and $P_i=\langle\phi_i|\phi_i\rangle$ is the fractional $\Omega$-partition of the triplet state. The corresponding potential energy matrix ${\bf V}(R;\mu,J^{\prime})$ was taken in the form
\begin{eqnarray}\label{Ham}
\langle ^{3}\Pi_{0}|H|^{3}\Pi_{0} \rangle & = & U_{a} - A^{so} + B(X+1),\nonumber \\
\langle ^{3}\Pi_{1}|H|^{3}\Pi_{1} \rangle & = & U_{a} + B(X+1),\nonumber \\
\langle ^{3}\Pi_{2}|H|^{3}\Pi_{2} \rangle & = & U_{a} + A^{so} + B(X-3),\nonumber \\
\langle ^{3}\Pi_{0}|H|^{3}\Pi_{1} \rangle & = & - B\sqrt{2X},\nonumber \\
\langle ^{3}\Pi_{1}|H|^{3}\Pi_{2} \rangle & = & - B\sqrt{2(X-2)},\nonumber
\end{eqnarray}
where $X=J(J+1)$ and $B(R) = \hbar^{2}/2 \mu R^{2}$, while $U_a(R)$ is the empirical Rydberg-Klein-Rees (RKR) potential of the $a^{3}\Pi_1$ component~\cite{RKR_CO} and $A^{so}(R)$ is the spin-orbit splitting available in Ref.~\cite{James} as a polynomial function of $R$.
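For illustration, the potential energy matrix ${\bf V}(R;\mu,J^{\prime})$ defined above can be assembled numerically as in the following schematic Python fragment. The callables \texttt{U\_a} and \texttt{A\_so}, as well as all other names, are illustrative assumptions (e.g., interpolants of the RKR potential and of the spin-orbit splitting polynomial); $\hbar=1$ is assumed for brevity:
\begin{verbatim}
import numpy as np

HBAR = 1.0  # hbar = 1 is assumed; adapt to the working unit system

def potential_matrix(R, mu, J, U_a, A_so):
    """3x3 matrix V(R; mu, J') of the close-coupled a3Pi problem."""
    X = J * (J + 1.0)
    B = HBAR**2 / (2.0 * mu * R**2)      # B(R) = hbar^2 / (2 mu R^2)
    Ua, Aso = U_a(R), A_so(R)
    V = np.zeros((3, 3))
    V[0, 0] = Ua - Aso + B * (X + 1.0)   # <3Pi_0|H|3Pi_0>
    V[1, 1] = Ua + B * (X + 1.0)         # <3Pi_1|H|3Pi_1>
    V[2, 2] = Ua + Aso + B * (X - 3.0)   # <3Pi_2|H|3Pi_2>
    V[0, 1] = V[1, 0] = -B * np.sqrt(2.0 * X)
    V[1, 2] = V[2, 1] = -B * np.sqrt(max(2.0 * (X - 2.0), 0.0))
    return V
\end{verbatim}
The guard on the last off-diagonal element reflects the fact that the corresponding spin-rotational coupling vanishes for the lowest $J^{\prime}$, where not all $\Omega$-components exist.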
Since the $0^- - 0^+$ and $2 - 0^+$ transitions are strictly forbidden under the relativistic approximation, only the $a^3\Pi_{0^+,1} - X^1\Sigma^+_{0^+}$ transitions contribute to the radiative lifetime $\tau (\Omega^{\pm}, v^{\prime}, J^{\prime})$ of the $a^3\Pi$ state:
\begin{equation}\label{tau}
\frac{1}{\tau} = \frac{k}{2J^{\prime}+1}\sum_{v_X,J_X}\nu^3_{a-X}
\left|\sum_{\Omega^{\prime}\in 0^+,1}\langle \phi_{^3\Pi_{\Omega^{\prime}}}|d_{^3\Pi_{\Omega^{\prime}}-X}|v_X\rangle S^{\Omega^{\prime} \Omega_X}_{J^{\prime} J_X}\right|^2,
\end{equation}
where $k=2.02586232\times 10^{-6}$, $\nu_{a-X}=E^{CC}(\Omega^{\pm},J^{\prime})-E_X(v_X,J_X)$ is the transition wavenumber (in cm$^{-1}$), $d_{^3\Pi_{\Omega^{\prime}}-X}(R)$ are the transition dipole moments (in $a.u.$), and $S^{\Omega^{\prime} \Omega_X}_{J^{\prime} J_X}$ are the properly normalized dimensionless direction cosine matrix elements, which are well known in analytical form~\cite{Bernath2005book}. It is clear that the $a^3\Pi_{0^+} - X^1\Sigma^+_{0^+}$ transition is irrelevant for the $\Omega^{-}$-components of the $a$ state. To avoid a summation over the rotational and vibrational levels of the ground state in Eq.~(\ref{tau}), the approximate sum rule~\cite{Tellinghuisen:84,PupyshevCPL94} generalized to the CC case~\cite{Kiyoshima2003} can be used:
\begin{eqnarray}\label{tausum}
\frac{1}{\tau_a} \approx k\sum_{\Omega^{\prime}\in 0^+,1}\langle \phi_{^3\Pi_{\Omega^{\prime}}}|[\Delta U_{^3\Pi_{\Omega^{\prime}}-X}]^3\vert d_{^3\Pi_{\Omega^{\prime}}-X}\vert^2|\phi_{^3\Pi_{\Omega^{\prime}}}\rangle,
\end{eqnarray}
where $\Delta U_{^3\Pi_{\Omega^{\prime}}-X} = U_{a^3\Pi_{\Omega^{\prime}}} - U_X$. The advantage of Eq.~(\ref{tausum}) is that the phases of the TDM functions and of the non-adiabatic wavefunctions can be left undefined.

\section{Results and discussion}
The excitation energies, $\Delta U^{ab}_{\Omega-X}(R)$, and transition dipole moments, $d^{ab}_{\Omega-X}(R)$, obtained for the $(1,2)1-X0^+$ transitions by using the fully relativistic FS-RCC method are given in Table~\ref{TabTDM}, together with the relevant sensitivity coefficients, $K^{\alpha}_{f}$, evaluated according to Eq.~(\ref{Kalpha}). The relativistic $(2)0^+-(1)0^+$ and $(1,2)1-(1)0^+$ TDM functions obtained in the framework of the computational schemes discussed above (FS-RCC, RCI and SSI) are depicted in Fig.~\ref{Fig_TDM} and Fig.~\ref{Fig_AXTDM}, where they are compared with each other and with their previous theoretical~\cite{Kirby1989} and experimental~\cite{Leon1988} counterparts. Fragments of the resulting relativistic potential energy curves (PECs) $U_{\Omega}(R)$ are shown in Fig.~\ref{Fig_PEC} along with the empirical RKR potentials~\cite{RKR_CO} corresponding to the Hund's ``\textbf{a}'' coupling case~\cite{Field2004book}. The radiative lifetimes evaluated by Eq.~(\ref{tausum}) for particular rovibrational levels of the metastable $a^3\Pi_{0^{\pm},1,2}$ substates are compared with their experimental counterparts~\cite{Fournier, tauCO2007, tauCO1999, tauCO2000} in Table~\ref{Tabtau}. The complete tables of the \emph{ab initio} FS-RCC, RCI and SSI transition dipole moments and the resulting radiative lifetimes are provided in machine-readable format in the Supplementary Material (SM).
\begin{figure}[t!]
\includegraphics[scale=0.4]{Fig2.eps}
\caption{The \emph{ab initio} $a^3\Pi_{0^+,1} - X^1\Sigma^+$ electronic transition dipole moments evaluated using the different computational schemes: FS-RCC -- the fully relativistic Fock space coupled cluster method combined with the finite-field (FF) approach and generalized relativistic pseudopotentials (GRPPs); RCI -- the large-scale fully relativistic multi-reference configuration interaction method; SSI -- the scalar-state-interaction calculations exploiting the GRPPs; MCQR -- the multi-configuration quadratic response approach as implemented in Ref.~\cite{Minaev1995}.}\label{Fig_TDM}
\end{figure}
It should be noted that the theoretical PECs were obtained for the excited $\Omega=0^{\pm},1$ states by adding the \emph{ab initio} calculated vertical excitation energies to the highly accurate empirical ground state potential $U^{emp}_X$ from Ref.~\cite{Meshkov2018}: $U_{\Omega}(R) = \Delta U^{ab}_{\Omega-X}(R) + U^{emp}_{X^1\Sigma^+}(R)$ (cf.~\cite{Zaitsevskii2005}). As expected, the resulting relativistic PECs demonstrate avoided crossings in the vicinity of the points $R_{A-a'}\approx 1.15$~(\AA) and $R_{a-a'}\approx 1.4$~(\AA), which correspond to the crossings of the singlet-triplet $A^1\Pi-a'^3\Sigma^+$ and triplet-triplet $a^3\Pi-a'^3\Sigma^+$ states, respectively. This effect is most clearly observed as abrupt changes of the relativistic $(1,2)1-X0^+$ TDM functions near the real crossing points of the relevant multiplet states (see Fig.~\ref{Fig_TDM} and Fig.~\ref{Fig_AXTDM}).
\begin{figure}[t!]
\includegraphics[scale=0.4]{Fig3.eps}
\caption{The relativistic electronic transition moments obtained for the spin-allowed $A^1\Pi - X^1\Sigma^+$ transition in the framework of the above computational schemes: FS-RCC, RCI and SSI. ``CI'' stands for the non-relativistic MRCI calculations performed in Ref.~\cite{Kirby1989}; the empirical $A-X$ TDM function is borrowed from Ref.~\cite{Leon1988}. The inset presents a fragment of the spin-forbidden $a'^3\Sigma_1^+ - X^1\Sigma^+$ TDM function extracted from the $(2)1 - X0^+$ FS-RCC result for the interval $R\in [1.2, 1.4]$~(\AA).}\label{Fig_AXTDM}
\end{figure}
Overall, good agreement is observed among the present \emph{ab initio} TDM functions evaluated in the framework of the alternative computational schemes FS-RCC, RCI and SSI. The results of the finite-field transition moment calculations within the FS-RCC method should apparently be considered the most reliable ones, since the radiative lifetimes based on the FS-RCC functions are found to be remarkably close to their most accurate experimental counterparts~\cite{tauCO2007} measured for the (1$^+$,0,1) and (2$^+$,0,2) rovibrational levels of the $a^3\Pi$ state (see Table~\ref{Tabtau}). Furthermore, the FS-RCC $a^3\Pi_{\Omega=0^+,1} - X^1\Sigma^+$ TDM functions reproduce very well the distinguishable $\tau$-values measured for the different $\Omega^{\pm}$-components of the (0$^\pm$,3,2) level~\cite{tauCO2000}. At the same time, the present $\tau$ estimates for the (0$^+$,0,$J'$) levels are significantly higher than the experimental 90~ms obtained for CO molecules trapped in solid neon matrices~\cite{Fournier}. This difference can be understood by bearing in mind the exponential decrease of the radiative lifetimes as the $J'$ value increases, namely $\tau$= 690, 250, 115 and 65~ms for the $J'$= 0, 1, 2 and 3 levels, respectively.
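The lifetimes reported here follow from Eq.~(\ref{tausum}) by simple quadrature. A schematic Python evaluation of that sum rule is sketched below; it assumes (as an illustration only) that the non-adiabatic vibrational components $\phi_{\Omega^{\prime}}(R)$, the difference potentials (in cm$^{-1}$) and the TDM functions (in $a.u.$) are supplied on a common radial grid:
\begin{verbatim}
import numpy as np

K_PREF = 2.02586232e-6  # prefactor k of the lifetime formula

def lifetime_sum_rule(R, phi, dU, d):
    """Estimate tau (in s) from the sum rule by quadrature on a grid R.

    phi, dU, d are dicts over the components '0+' and '1' holding the
    non-adiabatic vibrational wavefunction, the difference potential
    (cm^-1) and the TDM function (a.u.) on the grid, respectively.
    """
    inv_tau = 0.0
    for om in ('0+', '1'):
        integrand = np.abs(phi[om])**2 * dU[om]**3 * np.abs(d[om])**2
        inv_tau += K_PREF * np.trapz(integrand, R)
    return 1.0 / inv_tau
\end{verbatim}
Note that, in line with the remark after Eq.~(\ref{tausum}), only $|\phi|^2$ and $|d|^2$ enter, so the phases of the wavefunctions and TDM functions indeed play no role.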
The fully relativistic FS-RCC and RCI calculations generally confirm the validity of the previous MCQR TDM functions obtained for the $a^3\Pi_{\Omega=0^+,1} - X^1\Sigma^+$ transitions due to implementation of the multi-configuration quadratic response approach~\cite{Minaev1995} (see Fig.~\ref{Fig_TDM}). The relativistic electronic transition moments obtained for the spin-allowed $A^1\Pi - X^1\Sigma^+$ transition in the framework of the above FS-RCC, RCI and SSI computational schemes also agree very well with results of the conventional non-relativistic calculations~\cite{Kirby1989} and empirical data~\cite{Leon1988} (see Fig.~\ref{Fig_AXTDM}). It is interesting that the part of the relativistic FS-RCC $d_{(2)1 - (1)0^+}(R)$ function can be approximately assigned to the spin-forbidden $a'^3\Sigma_1^+ - X^1\Sigma^+$ transition. The Table~\ref{TabTDM} clearly demonstrates that the sensitivity coefficients of TDM functions, $K^{\alpha}_d$, corresponding to spin-forbidden transitions are very close to 2 indeed, as it should be expected from the trivial argument of perturbation theory: $d\varpropto \hat{H}_{so}\varpropto \alpha^{2}$. At the same time, the $K^{\alpha}_d$-values for the spin-allowed transitions are 1000 times smaller. The sensitivity coefficients of the excitation energy, $K^{\alpha}_{\Delta U}$, corresponding to the Cameron system, is about $2\times 10^{-3}$ while for the higher $(2)1 - X0^+$ transition the relevant $K^{\alpha}_d$-values are about $2.5\div 3.5\times 10^{-4}$. All sensitivity coefficients are very smooth functions of internuclear distance except narrow regions in the vicinity of the avoided crossing points. \begin{table*} \caption{The excitation energies, $\Delta U_{\Omega-X}$ (in cm$^{-1}$), transition dipole moments, $d_{\Omega-X}(R)$ (in $a.u.$), and dimensionless sensitivity coefficients, $K^{\alpha}_d$ and $K^{\alpha}_{\Delta U}$, obtained for the relativistic $(1,2)1-X0^+$ transitions of the CO molecule as functions of internuclear distance $R$ (in~\AA) calculated at the FS-RCC level.}\label{TabTDM} \begin{tabular}{ccccccccc} \hline\hline & \multicolumn{4}{c}{$(1)1-X0^+$ transition} & \multicolumn{4}{c}{$(2)1-X0^+$ transition}\\ $R$ & $\Delta U$ & $K^{\alpha}_{\Delta U}\!\times\! 10^{-3}$ & $d\!\times\! 10^{-3}$ & $K^{\alpha}_d$ & $\Delta U$ & $K^{\alpha}_{\Delta U}\!\times\! 10^{-4}$ & $d\!\times\! 10^{-3}$ & $K^{\alpha}_d$\\ \hline 1.075 & 55933 & 2.017 & 1.562 & 2.0004 & 73774 & 3.638 & 718.7 & -0.0020 \\ 1.100 & 53971 & 2.035 & 1.520 & 1.9996 & 71792 & 3.563 & 679.3 & -0.0022 \\ 1.150 & 50311 & 2.057 & 1.446 & 1.9993 & 66465 & -2.545 & 9.926 & 1.9252 \\ 1.200 & 46985 & 2.063 & 1.387 & 1.9995 & 58971 & -2.561 & 2.460 & 1.9769 \\ 1.250 & 43949 & 2.056 & 1.339 & 1.9993 & 52256 & -2.591 & 1.433 & 1.9889 \\ 1.300 & 41159 & 2.035 & 1.298 & 1.9968 & 46230 & -2.556 & 1.030 & 2.0000 \\ 1.350 & 38569 & 1.993 & 1.262 & 1.9922 & 40821 & -2.391 & 0.820 & 2.0261 \\ 1.375 & 37338 & 1.937 & 1.239 & 1.9757 & 38329 & -2.116 & 0.764 & 2.0785 \\ 1.400 & 35965 & -0.801 & 0.491 & 1.4647 & 36149 & 11.59 & 1.327 & 2.0813 \\ \hline \end{tabular} \end{table*} \begin{table} \caption{Comparison of the present theoretical and available experimental radiative lifetimes (in milliseconds) for the $a^3\Pi(\Omega^{\pm},v^{\prime},J^{\prime})$ rovibronic levels of the $^{12}$C$^{16}$O isotopomer. 
$^a$ - Ref.~\cite{Fournier}, $^b$ - Ref.~\cite{tauCO2000}, $^c$ - Ref.~\cite{tauCO1999}, $^d$ - Ref.~\cite{tauCO2007}.}\label{Tabtau} \begin{tabularx}{\columnwidth}{p{0.05\columnwidth}p{0.05\columnwidth}p{0.05\columnwidth}p{0.15\columnwidth}p{0.15\columnwidth}p{0.15\columnwidth}p{0.25\columnwidth}} \hline\hline $\Omega^{\pm}$ & $v^{\prime}$ & $J^{\prime}$ & \multicolumn{3}{c}{Theory} & Experiment \\ & & & FS-RCC & RCI & SSI \\ \hline 0$^+$ & 0 & 0 & 690 & 640 & 285 & 90$\pm$2~$^a$\\ 0$^+$ & 3 & 3 & 62 & 58 & 66 & 73$\pm$22~$^b$\\ \hline 0$^+$ & 3 & 2 & 102 & 84 & 104 & 119$\pm$36~$^b$\\ 0$^-$ & 3 & 2 & 147 & 170 & 165 & 147$\pm$44~$^b$\\ \hline 1$^+$ & 0 & 1 & 2.59 & 3.02 & 2.91 & 2.63$\pm$0.03~$^d$\\ 1$^+$ & 3 & 1 & 2.67 & 3.15 & 3.02 & 3.46$\pm$0.33~$^b$\\ 1$^-$ & 3 & 2 & 2.75 & 3.24 & 3.11 & 3.04$\pm$0.38~$^c$\\ & & & & & & 3.67$\pm$0.33~$^b$\\ \hline 2$^+$ & 0 & 2 & 142 & 165 & 159 & 143$\pm$4~$^d$\\ 2$^+$ & 3 & 2 & 154 & 179 & 173 & 211$\pm$63~$^b$\\ 2$^+$ & 3 & 3 & 65 & 75 & 73 & 72$\pm$22~$^b$\\ \hline \end{tabularx} \end{table} \section{Concluding remarks} The transition probabilities of the intercombination Cameron system of carbon monoxide have been computationally studied in the framework of three different \emph{ab initio} methods. The most reliable spin-forbidden $a^3\Pi_{\Omega=0^+,1} - X^1\Sigma^+$ transition dipole moments were obtained using the multi-reference Fock space coupled cluster method, which was combined with the empty-core generalized relativistic pseudopotential model to introduce a proper treatment of relativistic effects (including Breit interactions) into the all-electron correlation calculations. The radiative lifetimes evaluated for particular rovibronic levels of the fine structure $a^3\Pi_{\Omega=0^{\pm},1,2}$ components are in very good agreement with their most accurate experimental counterparts. The sensitivity coefficients of the CO Cameron system to a presumable drift of the fine structure constant demonstrate a very smooth $R$-dependence, except in the narrow region of the local spin-orbit coupling. This means that a future deperturbation analysis of even very small $a^3\Pi\sim a'^3\Sigma^+$ perturbations can provide much higher sensitivity coefficients for the CO Cameron bands. \section*{Acknowledgements} The work was supported by the Russian Science Foundation (RSF), Grant No.~18-13-00269. The work on the GRPP generation for the light elements was supported by a personal scientific fellowship awarded to N.S.\ Mosyagin by the governor of the Leningrad district.
\section{Introduction}\label{sec1}\setcounter{equation}{0} We consider the \emph{stochastic Burgers-Huxley equation} for $(x,t)\in\mathcal{O}\times(0,T)=(0,1)\times(0,T)$ with a random force as (see \cite{JS}) \begin{align}\label{1.1} du(t)=\left(\nu\frac{\partial^2u(t)}{\partial x^2}-\alpha u(t)\frac{\partial u(t)}{\partial x}+\beta u(t)(1-u(t))(u(t)-\gamma)\right)dt+\sigma(t,u(t))dW(t), \end{align} where $\alpha>0$ is the advection coefficient, and $\nu,\beta>0$ and $\gamma\in(0,1)$ are parameters. In \eqref{1.1}, $W(\cdot)$ is an $\mathrm{L}^2(\mathcal{O})$-valued $Q$-Wiener process. We supplement \eqref{1.1} with the Dirichlet boundary conditions \begin{align}\label{1.5} u(0,t)=u(1,t)=0, \end{align} and the initial condition \begin{align}\label{1.6} u(x,0)=u_0(x), \ x\in\overline{\mathcal{O}}. \end{align} Equation \eqref{1.1} is a prototype model describing the interaction between reaction mechanisms, convection effects and diffusion transport. Our goal in this work is to study global solvability results as well as the asymptotic behavior of solutions to the problem \eqref{1.1} with the boundary and initial conditions \eqref{1.5} and \eqref{1.6}. We use a stochastic generalization of a localized version of the Minty-Browder technique to obtain the global strong solution. A local monotonicity property of the linear and nonlinear operators is exploited in the proofs. The inviscid limit of the stochastic Burgers-Huxley equation to the Burgers and Huxley equations is also discussed. For the additive Gaussian noise case, using energy estimates and Doob's martingale inequality, exponential estimates for the exit time of solutions of the stochastic Burgers-Huxley equation are obtained. We also establish a similar exit time estimate by using a Freidlin-Wentzell type large deviations principle. The existence of a unique ergodic and strongly mixing invariant measure for the stochastic Burgers-Huxley equation is established by making use of the exponential stability of solutions. For $\alpha=0$, the equation \eqref{1.1} takes the form \begin{align}\label{2} du(t)=\left(\nu\frac{\partial^2u(t)}{\partial x^2}+\beta u(t)(1-u(t))(u(t)-\gamma)\right)dt+\sigma(t,u(t)) dW(t), \end{align} which is known as the \emph{stochastic Huxley equation}; it describes nerve pulse propagation in nerve fibers and wall motion in liquid crystals (\cite{XYW1}). For $\beta=0$ and $\alpha=1$, the equation \eqref{1.1} reduces to \begin{align}\label{3} du(t)=\left(\nu\frac{\partial^2u(t)}{\partial x^2}- u(t)\frac{\partial u(t)}{\partial x}\right)dt+\sigma(t,u(t)) dW(t), \end{align} which is the well-known \emph{stochastic viscous Burgers equation}. In \cite{JMB}, Burgers studied the deterministic model of turbulence phenomena (see also \cite{HB,JMB1}). The authors in \cite{GDP} proved the existence and uniqueness of a global mild solution, as well as the existence of an invariant measure, for the stochastic Burgers equation perturbed by cylindrical Gaussian noise. The authors in \cite{BCJ94} studied the Burgers equation perturbed by a white noise in space and time, and proved the existence of solutions by showing that the Cole-Hopf transformation is meaningful also in the stochastic case. The global existence and uniqueness of the strong, weak and mild solutions for the one-dimensional Burgers equation perturbed by L\'evy noise are established in \cite{ZDTG}.
The Burgers equation perturbed by a multiplicative white noise is considered in \cite{GDP2}, where the existence and uniqueness of the global solution, as well as the strong Feller property and irreducibility of the corresponding transition semigroup, are proved (see also Chapter 14, \cite{GDJZ}). Moreover, the existence and uniqueness of an invariant measure is also established there. Exponential ergodicity for stochastic Burgers equations is established in \cite{BGBM}. Control problems and dynamic programming for the stochastic Burgers equation have been studied in \cite{GDP1,GDP2,HCR}, etc. The stochastic generalized Burgers-Huxley equation perturbed by cylindrical Gaussian noise is considered in \cite{MTM2}, where the existence of a unique global mild solution is proved using a fixed point method and stopping time arguments. The works \cite{AAIG,DBAJ,MHJV,ATHZ}, etc., consider the numerical analysis of stochastic Burgers equations. Various mathematical problems regarding the stochastic Burgers equation are available in the literature, and the interested readers are referred to \cite{RBLB,ZBLD,LBGG,PGMJ,FYW,ATHZ1}, etc., and the references therein. The rest of the paper is organized as follows. In the next section, we provide the abstract formulation of the problem and introduce the function spaces needed to obtain the global solvability results for the system \eqref{1.1}-\eqref{1.6}. A local monotonicity property, as well as the hemicontinuity of the linear and nonlinear operators in an $\mathrm{L}^{\infty}$-ball, is proved in the same section (Theorem \ref{monotone} and Lemma \ref{lem2.5}). The existence and uniqueness of a global strong solution is established in section \ref{sec3} using a stochastic generalization of a localized version of the Minty-Browder technique (Theorem \ref{exis}). The inviscid limit of the stochastic Burgers-Huxley equation to the stochastic Burgers equation (as $\beta\to 0$, Proposition \ref{prop5.1}) as well as to the stochastic Huxley equation (as $\alpha\to 0$, Proposition \ref{prop4.2}) is discussed in section \ref{sec5}. The system \eqref{1.1}-\eqref{1.6} perturbed by additive Gaussian noise is considered in sections \ref{sec6} and \ref{sec9}. Using energy estimates and Doob's martingale inequality, exponential estimates for the exit from a ball of radius $R$ by time $T$ for strong solutions of the stochastic Burgers-Huxley equation are derived in section \ref{sec6} (Remark \ref{rem5.10}). We also study exit time estimates in the context of a Freidlin-Wentzell type large deviations principle in the same section (Theorem \ref{thm4.8}). In section \ref{sec9}, we prove exponential moment energy estimates for the strong solution to the system \eqref{1.1}-\eqref{1.6} as well as the exponential stability of solutions (Theorems \ref{expe} and \ref{exps}). Finally, we establish the existence of a unique ergodic and strongly mixing invariant measure for the stochastic Burgers-Huxley equation, using the exponential stability of solutions (Theorems \ref{EIM} and \ref{UEIM}). \section{Mathematical formulation}\label{sec2}\setcounter{equation}{0} In this section, we present the necessary function spaces and the properties of the linear and nonlinear operators used to obtain the global solvability results for the system \eqref{1.1}-\eqref{1.6}. We show that the sum of the linear and nonlinear operators is locally monotone and hemicontinuous.
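Before developing the functional-analytic framework, it may help to see the dynamics \eqref{1.1} concretely. The following is a minimal finite-difference Euler--Maruyama sketch of \eqref{1.1} with the Dirichlet conditions \eqref{1.5} and a truncated $Q$-Wiener expansion. It is an illustration only: the grid sizes, the truncation level $K$, the eigenvalues $\mu_k$ of $Q$, the choice $\sigma=0.05\,\mathrm{Id}$ and all parameter values are assumptions made for the example, and this is not a scheme analyzed in this paper.
\begin{verbatim}
import numpy as np

# Illustrative parameters; all values here are assumptions.
nu, alpha, beta, gamma = 0.1, 1.0, 1.0, 0.5
N, M, T = 128, 10000, 1.0            # grid intervals, time steps, horizon
dx, dt = 1.0 / N, T / M
x = np.linspace(0.0, 1.0, N + 1)

u = np.sin(np.pi * x)                # initial datum u_0
K = 20                               # truncation level of the noise expansion
k = np.arange(1, K + 1)
mu = 1.0 / (k * np.pi) ** 2          # eigenvalues of Q (trace class)
e = np.sqrt(2.0) * np.sin(np.outer(k * np.pi, x))   # eigenfunctions on (0,1)

rng = np.random.default_rng(0)
for _ in range(M):
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    adv = u * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
    reac = beta * u * (1.0 - u) * (u - gamma)
    dW = np.sqrt(dt) * rng.standard_normal(K)       # increments of beta_k
    noise = (np.sqrt(mu) * dW) @ e                  # sum_k sqrt(mu_k) e_k dbeta_k
    u = u + dt * (nu * lap - alpha * adv + reac) + 0.05 * noise
    u[0] = u[-1] = 0.0               # Dirichlet boundary conditions (1.5)
\end{verbatim}
Such an explicit scheme is stable only under a parabolic step-size restriction of the form $\nu\,\Delta t/\Delta x^2\lesssim 1/2$, which the values above satisfy.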
\subsection{Functional setting} Let $\mathrm{C}_0^{\infty}(\mathcal{O})$ denote the space of all infinitely differentiable functions with compact support in $\mathcal{O}$. For $p\in[2,\infty)$, the Lebesgue spaces are denoted by $\mathrm{L}^p(\mathcal{O})$ and the Sobolev spaces are denoted by $\mathrm{H}^{k}(\mathcal{O})$. The norm in $\mathrm{L}^2(\mathcal{O})$ is denoted by $\|\cdot\|_{\mathrm{L}^2}$ and the inner product in $\mathrm{L}^2(\mathcal{O})$ is denoted by $(\cdot,\cdot)$. Let $\mathrm{H}_0^1(\mathcal{O})$ denote the closure of $\mathrm{C}_0^{\infty}(\mathcal{O})$ in the $\|\partial_x\cdot\|_{\mathrm{L}^2}$ norm. As $\mathcal{O}$ is a bounded domain, note that $\|\partial_x\cdot\|_{\mathrm{L}^2}$ defines a norm on $\mathrm{H}^1_0(\mathcal{O})$, and we have the continuous embeddings $\mathrm{H}_0^1(\mathcal{O})\subset\mathrm{L}^2(\mathcal{O})\subset\mathrm{H}^{-1}(\mathcal{O})$, where $\mathrm{H}^{-1}(\mathcal{O})$ is the dual space of $\mathrm{H}_0^1(\mathcal{O})$. Recall that the embedding $\mathrm{H}_0^1(\mathcal{O})\subset\mathrm{L}^2(\mathcal{O})$ is compact. The duality pairing between $\mathrm{H}_0^1(\mathcal{O})$ and $\mathrm{H}^{-1}(\mathcal{O})$ is denoted by $\langle\cdot,\cdot\rangle$. In one dimension, we have the following continuous embeddings: $\mathrm{H}_0^1(\mathcal{O})\subset\mathrm{L}^{\infty}(\mathcal{O})\subset\mathrm{L}^p(\mathcal{O}),$ for all $p\in[1,\infty)$ (see \cite{HM}). \subsection{Linear operator} Let $A$ denote the self-adjoint and unbounded operator on $\mathrm{L}^2(\mathcal{O})$ defined by\footnote{Strictly speaking, one has to define $A:=-\frac{d^2}{dx^2}$.} \begin{align*} A u:=-\frac{\partial^2u}{\partial x^2}, \end{align*} with domain $D(A)= \mathrm{H}^2(\mathcal{O})\cap\mathrm{H}_0^1(\mathcal{O})=\{u\in\mathrm{H}^2(\mathcal{O}):u(0)=u(1)=0\}.$ The eigenvalues and the corresponding normalized eigenfunctions of $A$ are given by \begin{align*} \lambda_k=k^2 \pi^2 \ \text{ and } \ w_k(x)=\sqrt{2}\sin (k\pi x), \ k=1,2,\ldots. \end{align*} Since $\mathcal{O}$ is a bounded domain, $A^{-1}$ exists and is a compact operator on $\mathrm{L}^2(\mathcal{O})$. Moreover, one can define the fractional powers of $A$, and $$\|A^{1/2}u\|_{\mathrm{L}^2}^2=\sum_{j=1}^{\infty}\lambda_j|\langle u,w_j\rangle|^2\geq \lambda_1\sum_{j=1}^{\infty}|\langle u,w_j\rangle|^2=\lambda_1\|u\|_{\mathrm{L}^2}^2=\pi^2\|u\|_{\mathrm{L}^2}^2,$$ which is the Poincar\'e inequality. Note also that $\|u\|_{\mathrm{H}^{s}}=\|A^{s/2}u\|_{\mathrm{L}^2},$ for all $s\in\mathbb{R}$. An integration by parts yields $$(Au,v)=(\partial_xu,\partial_xv)=:a(u,v), \ \text{ for all } \ v\in\mathrm{H}_0^1(\mathcal{O}),$$ so that $A:\mathrm{H}_0^1(\mathcal{O})\to\mathrm{H}^{-1}(\mathcal{O})$. \iffalse Let us define the operator $A_p=-\frac{\partial^2}{\partial x^2}$ with $D(A_p)=\mathrm{W}_0^{1,p}(\mathcal{O})\cap\mathrm{W}^{2,p}(\mathcal{O}),$ for $1< p<\infty$ and $D(A_1)=\{u\in\mathrm{W}^{1,1}(\mathcal{O}):u\in\mathrm{L}^1(\mathcal{O})\},$ for $p=1$. From Proposition 4.3, Chapter 1 \cite{VB}, we know that for $1\leq p<\infty$, $A_p$ generates an analytic semigroup of contractions in $X=\mathrm{L}^p(\mathcal{O})$.
\fi \subsection{Nonlinear operators} Let us now define $b:\mathrm{H}_0^1(\mathcal{O})\times\mathrm{H}_0^1(\mathcal{O})\times \mathrm{H}_0^1(\mathcal{O})\to\mathbb{R}$ as $$b(u,v,w)=\int_0^1u(x)\frac{\partial v(x)}{\partial x}w(x)dx.$$ Using an integration by parts and the boundary conditions, it can easily be seen that \begin{align}\label{6} b(u,u,u)=(u\partial_xu,u)=\int_0^1u(x)\frac{\partial u(x)}{\partial x}u(x)d x=\frac{1}{3}\int_0^1\frac{\partial}{\partial x}(u(x))^{3}d x=0. \end{align} \iffalse In general, for all $p>2$ and $u\in\mathrm{H}_0^1(\mathcal{O})$, we consider \begin{align} b(u,u,|u|^{p-2}u)&=(u^{\delta}\partial_xu,|u|^{p-2}u)=\frac{1}{\delta+2}\int_0^1\frac{\partial}{\partial x}(u(x))^{\delta+2}|u(x)|^{p-2}d x\nonumber\\&=-\frac{1}{\delta+2}\int_0^1(u(x))^{\delta+2}\frac{\partial}{\partial x}|u(x)|^{p-2}d x\nonumber\\&=-\frac{p-2}{\delta+2}\int_0^1(u(x))^{\delta+2}|u(x)|^{p-4}u(x)\frac{\partial u(x)}{\partial x}d x\nonumber\\&=-\frac{p-2}{\delta+2}(u^{\delta}\partial_xu,|u|^{p-2}u). \end{align} The above expression implies \begin{align}\label{7a}b(u,u,|u|^{p-2}u)=(u^{\delta}\partial_xu,|u|^{p-2}u)=0,\end{align} for all $p>2$ and $u\in\mathrm{H}_0^1(\mathcal{O})$. Moreover, for all $u,v\in\mathrm{H}_0^1(\mathcal{O})$, we have \begin{align}\label{6a} b(u,u,v)&=(u^{\delta}\partial_xu,v)=\frac{1}{\delta+1}\int_0^1\frac{\partial }{\partial x}(u(x))^{\delta+1} v(x)d x=-\frac{1}{\delta+1}\int_0^1(u(x))^{\delta+1}\frac{\partial v(x)}{\partial x}d x\nonumber\\&=-\frac{1}{\delta+1}(u^{\delta+1},\partial_xv). \end{align} \fi For $w\in\mathrm{L}^2(\mathcal{O})$, we can define an operator $B(\cdot,\cdot):\mathrm{H}_0^1(\mathcal{O})\times\mathrm{H}_0^1(\mathcal{O})\to\mathrm{L}^2(\mathcal{O})$ by $(B(u,v),w)=b(u,v,w)$. Since \begin{align*} |b(u,v,w)|\leq\|u\|_{\mathrm{L}^{\infty}}\|\partial_xv\|_{\mathrm{L}^2}\|w\|_{\mathrm{L}^{2}}\leq\|u\|_{\mathrm{H}_0^1}\|v\|_{\mathrm{H}_0^1}\|w\|_{\mathrm{L}^2}, \end{align*} we have $\|B(u,v)\|_{\mathrm{L}^2}\leq \|u\|_{\mathrm{H}_0^1}\|v\|_{\mathrm{H}_0^1}$. We denote $B(u)=B(u,u)$, so that one easily obtains $\|B(u)\|_{\mathrm{L}^2}\leq \|u\|_{\mathrm{H}_0^1}^{2}$. Moreover, for all $v\in\mathrm{H}_0^1(\mathcal{O})$, we have \begin{align*} |\langle B(u),v\rangle| =|\langle u\partial_xu,v\rangle| =\left|-\frac{1}{2}\langle u^2,\partial_xv\rangle\right| \leq \frac{1}{2}\|u\|_{\mathrm{L}^4}^2\|\partial_xv\|_{\mathrm{L}^2}, \end{align*} so that $B(\cdot):\mathrm{L}^4(\mathcal{O})\to\mathrm{H}^{-1}(\mathcal{O})$ and \begin{align}\label{2p5} \|B(u)\|_{\mathrm{H}^{-1}}\leq \frac{1}{2}\|u\|_{\mathrm{L}^4}^2.
\end{align} Using H\"older's inequality, we have \iffalse \begin{align*} \langle B(u)-B(v),w\rangle &=(u\partial_xu-v\partial_xv,w)=(w\partial_xu,w)+(v\partial_xw,w)\nonumber\\&=(w\partial_xw,w)+(w\partial_xv,w)+(v\partial_xw,w)\nonumber\\&=(\partial_xv,w^2)+\frac{1}{2}(v,\partial_xw^2)=\frac{1}{2}(v,w\partial_xw)\leq\frac{1}{2}\|v\|_{\mathrm{L}^{\infty}}\|w\|_{\mathrm{L}^2}\|\partial_xw\|_{\mathrm{L}^2}, \end{align*} \begin{align*} &(B(u)-B(v),w)\nonumber\\&=(u^{\delta}\partial_x(u-v),w)+((u^{\delta}-v^{\delta})\partial_xv,w)\nonumber\\&=-\frac{\delta}{2}(u^{\delta-1}\partial_xu,w^2)+\delta((\theta_1 u+(1-\theta_1)v)^{\delta-1}\partial_xv,w^2)\nonumber\\&\leq\frac{\delta}{2}\|u\|_{\mathrm{L}^{\infty}}^{\delta-1}\|\partial_xu\|_{\mathrm{L}^2}\|w\|_{\mathrm{L}^{\infty}}\|w\|_{\mathrm{L}^2}+\delta 2^{\delta}\left(\|u\|_{\mathrm{L}^{\infty}}^{\delta-1}+\|v\|_{\mathrm{L}^{\infty}}^{\delta-1}\right)\|\partial_xv\|_{\mathrm{L}^2}\|w\|_{\mathrm{L}^{\infty}}\|w\|_{\mathrm{L}^2}, \end{align*} so that \begin{align} \|B(u)-B(v)\|_{\mathrm{L}^2}&\leq\frac{C\delta}{2}\|u\|_{\mathrm{H}_0^1}^{\delta}\|w\|_{\mathrm{H}_0^1}+C\delta 2^{\delta}\left(\|u\|_{\mathrm{H}_0^1}^{\delta-1}+\|v\|_{\mathrm{H}_0^1}^{\delta-1}\right)\|v\|_{\mathrm{H}_0^1}\|w\|_{\mathrm{H}_0^1}\nonumber\\&\leq C\delta(1+2^{\delta})r^{\delta}\|w\|_{\mathrm{H}_0^1}, \end{align} for $\|u\|_{\mathrm{H}_0^1},\|v\|_{\mathrm{H}_0^1}\leq r$. so that \begin{align} \|B(u)-B(v)\|_{\mathrm{H}^{-1}}\leq \frac{1}{2}\|v\|_{\mathrm{L}^{\infty}}\|w\|_{\mathrm{L}^2}\leq\frac{C}{2}\|v\|_{\mathrm{H}_0^1}\|w\|_{\mathrm{L}^2}. \end{align} Thus the operator $B(\cdot):\mathrm{H}_0^1(\mathcal{O})\to\mathrm{H}^{-1}(\mathcal{O})$ is locally Lipschitz. Moreover, we obtain \begin{align*} (B(u)-B(v),w)=-(\partial_xv,w^2)\leq \|\partial_xv\|_{\mathrm{L}^2}\|w\|_{\mathrm{L}^{\infty}}\|w\|_{\mathrm{L}^2}, \end{align*} so that \fi \begin{align}\label{2.1} \|B(u)-B(v)\|_{\mathrm{L}^2}&=\|u\partial_xu-v\partial_xv\|_{\mathrm{L}^2}\leq\|(u-v)\partial_xu\|_{\mathrm{L}^2}+\|v\partial_x(u-v)\|_{\mathrm{L}^2}\nonumber\\&\leq\|u-v\|_{\mathrm{L}^{\infty}}\|u\|_{\mathrm{H}_0^1}+\|v\|_{\mathrm{L}^{\infty}}\|u-v\|_{\mathrm{H}_0^1}\nonumber\\&\leq C(\|u\|_{\mathrm{H}_0^1}+\|v\|_{\mathrm{H}_0^1})\|u-v\|_{\mathrm{H}_0^1}, \end{align} and hence the operator $B:\mathrm{H}_0^1(\mathcal{O})\to\mathrm{L}^2(\mathcal{O})$ is locally Lipschitz. Moreover, we have \begin{align} \langle B(u)-B(v),w\rangle=\frac{1}{2}(\partial_xu^2-\partial_xv^2,w)=-\frac{1}{2}((u-v)(u+v),\partial_xw). \end{align} Using H\"older's inequality, we get \begin{align} |\langle B(u)-B(v),w\rangle|\leq\frac{1}{2}(\|u\|_{\mathrm{L}^4}+\|v\|_{\mathrm{L}^4})\|u-v\|_{\mathrm{L}^4}\|w\|_{\mathrm{H}_0^1}, \end{align} and hence $\|B(u)-B(v)\|_{\mathrm{H}^{-1}}\leq \frac{1}{2}(\|u\|_{\mathrm{L}^4}+\|v\|_{\mathrm{L}^4})\|u-v\|_{\mathrm{L}^4}$, so that the operator $B:\mathrm{L}^4(\mathcal{O})\to\mathrm{H}^{-1}(\mathcal{O})$ is locally Lipschitz. Let us define $c(u)=u(1-u)(u-\gamma)$. Using H\"older's and Young's inequalities, we have \begin{align}\label{7} (c(u),u)&=(u(1-u)(u-\gamma),u)=((1+\gamma)u^{2}-\gamma u-u^{3},u)\nonumber\\&=(1+\gamma)(u^{2},u)-\gamma\|u\|_{\mathrm{L}^2}^2-\|u\|_{\mathrm{L}^{4}}^{4}\nonumber\\&\leq(1+\gamma)\|u\|_{\mathrm{L}^4}^2\|u\|_{\mathrm{L}^2}-\gamma\|u\|_{\mathrm{L}^2}^2-\|u\|_{\mathrm{L}^{4}}^{4}\nonumber\\&\leq -\frac{1}{2}\|u\|_{\mathrm{L}^4}^4+\frac{(1+\gamma^2)}{2}\|u\|_{\mathrm{L}^2}^2, \end{align} for all $u\in\mathrm{L}^{4}(\mathcal{O})$. 
Using H\"older's inequality, for $u,v\in\mathrm{H}_0^1(\mathcal{O})$, we get \begin{align}\label{2p7} \|c(u)-c(v)\|_{\mathrm{L}^2}&=\|(1+\gamma)(u^{2}-v^{2})-\gamma(u-v)-(u^{3}-v^{3})\|_{\mathrm{L}^2}\nonumber\\&\leq (1+\gamma)\|u+v\|_{\mathrm{L}^{\infty}}\|w\|_{\mathrm{L}^2}+\gamma\|w\|_{\mathrm{L}^2}+\|u^2+uv+v^2\|_{\mathrm{L}^{\infty}}\|w\|_{\mathrm{L}^2}\nonumber\\&\leq\left[(1+\gamma)(\|u\|_{\mathrm{L}^{\infty}}+\|v\|_{\mathrm{L}^{\infty}})+\gamma+\frac{1}{2}(\|u\|_{\mathrm{L}^{\infty}}^2+\|v\|_{\mathrm{L}^{\infty}}^2)\right]\|w\|_{\mathrm{L}^2}, \end{align} and the operator $c(\cdot):\mathrm{H}_0^1(\mathcal{O})\to\mathrm{L}^2(\mathcal{O})$ is locally Lipschitz. For more details see \cite{MTM1}. \iffalse \begin{remark} Note that \begin{align*} |\partial_x(|\varphi|^{p/2})|=|\partial_x(|\varphi|^2)^{p/4}|=\frac{p}{2}|\varphi|^{\frac{p-2}{2}}|\partial_x\varphi|, \end{align*} so that \begin{align*} \|\partial_x(|\varphi|^{p/2})\|_{\mathrm{L}^2}=\frac{p}{2}\||\varphi|^{\frac{p-2}{2}}\partial_x\varphi\|_{\mathrm{L}^2}. \end{align*} \iffalse From Theorem 8.8, \cite{HM}, we know that the injection $\mathrm{W}^{1,1}(\mathcal{O})\subset\mathrm{L}^q(\mathcal{O})$ is compact for all $1\leq q<\infty$. Using Poincar\'e inequality, we also have $\|\varphi\|_{\mathrm{W}^{1,1}}\leq C\|\partial_x\varphi\|_{\mathrm{L}^1}\leq C\|\partial_x\varphi\|_{\mathrm{L}^2}$, for all $\varphi\in\mathrm{W}^{1,1}(\mathcal{O})$. \fi Using Poincar\'e inequality, we have $\|\varphi\|_{\mathrm{L}^2}\leq \frac{1}{\sqrt{\pi^2}}\|\partial_x\varphi\|_{\mathrm{L}^2}$. Thus, it is immediate that \begin{align}\label{111} \|\varphi\|_{\mathrm{L}^p}^{p}=\||\varphi|^{p/2}\|_{\mathrm{L}^2}^2\leq\frac{1}{\pi^2}\|\partial_x(|\varphi|^{p/2})\|_{\mathrm{L}^2}^2=\frac{p^2}{4\pi^2}\||\varphi|^{\frac{p-2}{2}}\partial_x\varphi\|_{\mathrm{L}^2}^2, \end{align} for all $\varphi\in\mathrm{H}_0^1(\mathcal{O})$. \end{remark} \fi \subsection{Local monotonicity} We show that the operator $F(\cdot)=\nu A+\alpha B(\cdot)-\beta c(\cdot)$ is locally monotone. \begin{definition} Let $\mathrm{X}$ be a Banach space and let $\mathrm{X}^{'}$ be its topological dual. An operator $\mathrm{F}:\mathrm{D}\rightarrow \mathrm{X}^{'},$ $\mathrm{D}=\mathrm{D}(\mathrm{F})\subset \mathrm{X}$ is said to be \emph{monotone} if $\langle\mathrm{F}(x)-\mathrm{F}(y),x-y\rangle\geq 0,$ for all $x,y\in \mathrm{D}$. \iffalse The operator $\mathrm{F}(\cdot)$ is \emph{maximal monotone} if there is no monotone operator that properly contains it, that is, if for $x\in\mathrm{X}$ and $w\in\mathrm{X}'$, the inequality $\langle w-\mathrm{F}(x),x-y\rangle\geq 0$, for all $y\in\mathrm{X}$ implies $w=\mathrm{F}(x)$. \fi The operator $\mathrm{F}(\cdot)$ is said to be \emph{hemicontinuous} if, for all $x, y\in\mathrm{X}$ and $w\in\mathrm{X}'$ $$\lim_{\lambda\to 0}\langle\mathrm{F}(x+\lambda y),w\rangle=\langle\mathrm{F}(x),w\rangle.$$ The operator $\mathrm{F}(\cdot)$ is called \emph{demicontinuous} if for all $x\in\mathrm{D}$ and $y\in\mathrm{X}$, the functional $x \mapsto\langle \mathrm{F}(x), y\rangle$ is continuous, or in other words, $x_k\to x$ in $\mathrm{X}$ implies $\mathrm{F}(x_k)\xrightarrow{w}\mathrm{F}(x)$ in $\mathrm{X}'$. Clearly demicontinuity implies hemicontinuity. 
\end{definition} \begin{theorem}\label{monotone} For a given $r > 0$, we consider the following (closed) $\mathrm{L}^{\infty}$-ball $\mathrm{B}_r$ in the space $\mathrm{H}_0^1(\mathcal{O})$: $$\mathrm{B}_r := \big\{v \in \mathrm{H}_0^1(\mathcal{O}):\|v\|_{\mathrm{L}^{\infty}}\leq r\big\}.$$ Then, for any $u\in\mathrm{H}_0^1(\mathcal{O})$ and $v\in\mathrm{B}_r$, we have \begin{align}\label{3.7} &\langle F(u)-F(v),u-v\rangle +\left\{\frac{\alpha^2}{2\nu}r^2+\beta(1+\gamma+\gamma^2)\right\}\|u-v\|_{\mathrm{L}^2}^2\geq \frac{\nu}{2}\|\partial_x(u-v)\|_{\mathrm{L}^2}^2\geq 0. \end{align} \end{theorem} \begin{proof} Let us first consider $\langle F(u)-F(v),u-v\rangle$ and simplify it as \begin{align}\label{2.9} \langle F(u)-F(v),u-v\rangle &=\langle\nu Au+\alpha B(u)-\beta c(u)-(\nu Av+\alpha B(v)-\beta c(v)),u-v\rangle \nonumber\\&=\nu\|\partial_x(u-v)\|_{\mathrm{L}^2}^2+\alpha(B(u)-B(v),u-v)-\beta(c(u)-c(v),u-v). \end{align} Using an integration by parts, \eqref{6}, H\"older's and Young's inequalities, we estimate $|\alpha(B(u)-B(v),u-v)|$ as \begin{align}\label{2.10} |\alpha(B(u)-B(v),u-v)|&=\alpha|(u\partial_xu-v\partial_xv,u-v)|\nonumber\\&=\alpha|((u-v)\partial_xu,u-v)+(v\partial_x(u-v),u-v)|\nonumber\\&=\alpha\left|((u-v)\partial_xv,u-v)+\frac{1}{2}(v,\partial_x(u-v)^2)\right|\nonumber\\&=\alpha\left|-\frac{1}{2}(v,\partial_x(u-v)^2)\right|=\alpha|-(v,(u-v)\partial_x(u-v))|\nonumber\\&\leq\alpha\|v\|_{\mathrm{L}^{\infty}}\|u-v\|_{\mathrm{L}^2}\|\partial_x(u-v)\|_{\mathrm{L}^2}\nonumber\\&\leq\frac{\nu}{2}\|\partial_x(u-v)\|_{\mathrm{L}^2}^2+\frac{\alpha^2}{2\nu}\|v\|_{\mathrm{L}^{\infty}}^2\|u-v\|_{\mathrm{L}^2}^2. \end{align} Let us now estimate $\beta(c(u)-c(v),u-v)$ as \iffalse \begin{align} &\beta(c(u)-c(v),u-v)\nonumber\\&=\beta((1+\gamma)(u^2-v^2)-\gamma(u-v)-(u^3-v^3),u-v)\nonumber\\&=\beta(1+\gamma)((u+v)(u-v),u-v)-\beta\gamma\|u-v\|_{\mathrm{L}^2}^2-\beta((u^2+uv+v^2)(u-v),u-v)\nonumber\\&=\beta(1+\gamma)((u-v)^2,u-v)+2\beta(1+\gamma)(v(u-v),u-v)-\beta\gamma\|u-v\|_{\mathrm{L}^2}^2\nonumber\\&\quad-\beta\|u-v\|_{\mathrm{L}^4}^4-3\beta(v^2(u-v),u-v)-3\beta(v(u-v)^2,u-v)\nonumber\\&\leq -\beta\gamma\|u-v\|_{\mathrm{L}^2}^2-\beta\|u-v\|_{\mathrm{L}^4}^4-3\beta(v^2(u-v),u-v)+\beta(1+\gamma)\|u-v\|_{\mathrm{L}^4}^2\|u-v\|_{\mathrm{L}^2}\nonumber\\&\quad+2\beta(1+\gamma)\|v\|_{\mathrm{L}^{\infty}}\|u-v\|_{\mathrm{L}^2}^2+3\beta\|v\|_{\mathrm{L}^{\infty}}\|u-v\|_{\mathrm{L}^4}^2\|u-v\|_{\mathrm{L}^2}\nonumber\\&\leq -\beta\gamma\|u-v\|_{\mathrm{L}^2}^2-\beta\|u-v\|_{\mathrm{L}^4}^4-3\beta(v^2(u-v),u-v)+\frac{\beta}{2}\|u-v\|_{\mathrm{L}^4}^4+\frac{\beta(1+\gamma)^2}{2}\|u-v\|_{\mathrm{L}^2}^2\nonumber\\&\quad+2\beta(1+\gamma)\|v\|_{\mathrm{L}^{\infty}}\|u-v\|_{\mathrm{L}^2}^2+\frac{\beta}{4}\|u-v\|_{\mathrm{L}^4}^4+{9\beta}\|v\|_{\mathrm{L}^{\infty}}^2\|u-v\|_{\mathrm{L}^2}^2, \end{align} \fi \begin{align}\label{2.11} &\beta(c(u)-c(v),u-v)\nonumber\\&=\beta((1+\gamma)(u^2-v^2)-\gamma(u-v)-(u^3-v^3),u-v)\nonumber\\&=\beta(1+\gamma)((u+v)(u-v),u-v)-\beta\gamma\|u-v\|_{\mathrm{L}^2}^2-\beta((u^2+uv+v^2)(u-v),u-v)\nonumber\\&=\beta(1+\gamma)((u+v)(u-v),u-v)-\beta\gamma\|u-v\|_{\mathrm{L}^2}^2-\beta\|u(u-v)\|_{\mathrm{L}^2}^2-\beta\|v(u-v)\|_{\mathrm{L}^2}^2\nonumber\\&\quad-\beta(uv(u-v),u-v)\nonumber\\&\leq\beta(1+\gamma)\left(\|u(u-v)\|_{\mathrm{L}^2}+\|v(u-v)\|_{\mathrm{L}^2}\right)\|u-v\|_{\mathrm{L}^2}\nonumber\\&\quad-\beta\gamma\|u-v\|_{\mathrm{L}^2}^2-\beta\|u(u-v)\|_{\mathrm{L}^2}^2-\beta\|v(u-v)\|_{\mathrm{L}^2}^2+\frac{\beta}{2}((u^2+v^2)(u-v),u-v)\nonumber\\&\leq
{\beta(1+\gamma+\gamma^2)}\|u-v\|_{\mathrm{L}^2}^2, \end{align} where we used H\"older's and Young's inequalities. Combining \eqref{2.10}-\eqref{2.11} and then substituting them in \eqref{2.9}, we find \begin{align}\label{2p13} &\langle F(u)-F(v),u-v\rangle+\left\{\frac{\alpha^2}{2\nu}\|v\|_{\mathrm{L}^{\infty}}^2+\beta(1+\gamma+\gamma^2)\right\}\|u-v\|_{\mathrm{L}^2}^2 \geq \frac{\nu}{2}\|\partial_x(u-v)\|_{\mathrm{L}^2}^2\geq 0. \end{align} Using the fact that $v\in\mathrm{B}_r$, we get the required result \eqref{3.7}. \end{proof} \begin{corollary}\label{mon1} For any $u,v\in\mathrm{L}^2(0, T ; \mathrm{H}_0^1(\mathcal{O}))$ and any continuous function $\rho(t)$ on $(0,T)$, we have \begin{align}\label{3.11y} &\int_0^Te^{-\rho(t)}\langle F(u(t))-F(v(t)),u(t)-v(t) \rangle d t\nonumber\\&\quad +\int_0^Te^{-\rho(t)}\left\{\frac{\alpha^2}{2\nu}\|v(t)\|_{\mathrm{L}^{\infty}}^2+\beta(1+\gamma+\gamma^2)+\frac{L}{2}\right\}\|u(t)-v(t)\|_{\mathrm{L}^2}^2d t\nonumber\\&\geq \frac{1}{2}\int_0^Te^{-\rho(t)}\|\sigma(t, u) - \sigma(t, v)\|^2_{\mathcal{L}_{Q}}d t, \end{align} where we used Hypothesis \ref{hyp} (H.3). \end{corollary} \begin{lemma}\label{lem2.5} The operator $F(\cdot)=\nu A+\alpha B(\cdot)-\beta c(\cdot)$ is demicontinuous. \end{lemma} \begin{proof} Let us take a sequence $u^n\to u$ in $\mathrm{H}_0^1(\mathcal{O})$. For $v\in\mathrm{H}_0^1(\mathcal{O})$, we consider \begin{align}\label{214} |\langle F(u^n)-F(u),v\rangle| &=|\nu\langle A(u^n)-A(u),v\rangle+\alpha\langle B(u^n)-B(u),v\rangle-\beta(c(u^n)-c(u),v)|\nonumber\\&=\Big|\nu(\partial_x(u^n-u),\partial_xv)-\frac{\alpha}{2}((u^n-u)(u^n+u),\partial_xv)\nonumber\\&\quad-\beta((u^n-u)\left[(u^n+u)(1+\gamma)-(\gamma+(u^n)^2+u^nu+u^2)\right],v)\Big|\nonumber\\&\leq\nu\|\partial_x(u^n-u)\|_{\mathrm{L}^2}\|\partial_xv\|_{\mathrm{L}^2}+\alpha\|u^n-u\|_{\mathrm{L}^4}\|u^n+u\|_{\mathrm{L}^4}\|\partial_xv\|_{\mathrm{L}^2}\nonumber\\&\quad+\beta(1+\gamma)\|u^n-u\|_{\mathrm{L}^2}\|u^n+u\|_{\mathrm{L}^2}\|v\|_{\mathrm{L}^{\infty}}+\beta\gamma\|u^n-u\|_{\mathrm{L}^2}\|v\|_{\mathrm{L}^2}\nonumber\\&\quad+\beta\|u^n-u\|_{\mathrm{L}^2}(\|u^n\|_{\mathrm{L}^4}^2+\|u^n\|_{\mathrm{L}^4}\|u\|_{\mathrm{L}^4}+\|u\|_{\mathrm{L}^4}^2)\|v\|_{\mathrm{L}^{\infty}}\nonumber\\&\leq \Big(\nu+\alpha(\|u^n\|_{\mathrm{L}^4}+\|u\|_{\mathrm{L}^4})+C\beta(1+\gamma)(\|u^n\|_{\mathrm{L}^2}+\|u\|_{\mathrm{L}^2})+C\beta\gamma\nonumber\\&\quad+C\beta(\|u^n\|_{\mathrm{L}^4}^2+\|u^n\|_{\mathrm{L}^4}\|u\|_{\mathrm{L}^4}+\|u\|_{\mathrm{L}^4}^2)\Big)\|u^n-u\|_{\mathrm{H}_0^1}\|v\|_{\mathrm{H}^1_0}, \end{align} where we used H\"older's and Young's inequalities. As $\mathrm{H}_0^1(\mathcal{O})\subset\mathrm{L}^{\infty}(\mathcal{O})\subset\mathrm{L}^4(\mathcal{O}),$ the right hand side of the inequality \eqref{214} tends to zero as $n\to\infty$, since $u^n\to u$ in $\mathrm{H}_0^1(\mathcal{O})$. Hence, the operator $F(\cdot)$ is demicontinuous, which implies that the operator $F(\cdot)$ is hemicontinuous. \end{proof} \subsection{Stochastic setting} In this subsection, we provide the definition and properties of the noise used in this work and also give the hypothesis satisfied by the noise coefficient. Let $(\Omega,\mathscr{F},\mathbb{P})$ be a complete probability space equipped with an increasing family of sub-sigma fields $\{\mathscr{F}_t\}_{0\leq t\leq T}$ of $\mathscr{F}$ satisfying \begin{enumerate} \item [(i)] $\mathscr{F}_0$ contains all elements $F\in\mathscr{F}$ with $\mathbb{P}(F)=0$, \item [(ii)] $\mathscr{F}_t=\mathscr{F}_{t+}=\bigcap\limits_{s>t}\mathscr{F}_s,$ for $0\leq t\leq T$.
\end{enumerate} \begin{definition} A stochastic process $\{W(t)\}_{0\leq t\leq T}$ is said to be an \emph{$\mathrm{L}^2(\mathcal{O})$-valued $\mathscr{F}_t$-adapted Wiener process} with covariance operator $Q$ if \begin{enumerate} \item [$(i)$] for each non-zero $h\in \mathrm{L}^2(\mathcal{O})$, $\|Q^{\frac{1}{2}}h\|_{\mathrm{L}^2}^{-1} (W(t), h)$ is a standard one-dimensional Wiener process, \item [$(ii)$] for any $h\in\mathrm{L}^2(\mathcal{O})$, $(W(t), h)$ is a martingale adapted to $\mathscr{F}_t$. \end{enumerate} \end{definition} The stochastic process $\{W(t) : 0\leq t\leq T\}$ is an $\mathrm{L}^2(\mathcal{O})$-valued Wiener process with covariance $Q$ if and only if, for arbitrary $t$, the process $W(t)$ can be expressed as $W(t) =\sum\limits_{k=1}^{\infty}\sqrt{\mu_k}e_k(x)\beta_k(t)$, where $\beta_{k}(t),k\in\mathbb{N},$ are independent one-dimensional Brownian motions on $(\Omega,\mathscr{F},\mathbb{P})$ and $\{e_k \}_{k=1}^{\infty}$ are orthonormal basis functions of $\mathrm{L}^2(\mathcal{O})$ such that $Q e_k=\mu_k e_k$. If $W(\cdot)$ is an $\mathrm{L}^2(\mathcal{O})$-valued Wiener process with covariance operator $Q$ such that $\mathop{\mathrm{Tr}} Q=\sum\limits_{k=1}^{\infty} \mu_k< +\infty$, then $W(\cdot)$ is a Gaussian process on $\mathrm{L}^2(\mathcal{O})$ and $ \mathbb{E}[W(t)] = 0,$ $\textrm{Cov} [W(t)] = tQ,$ $t\geq 0.$ The space $\mathrm{L}^2_Q(\mathcal{O})=Q^{\frac{1}{2}}\mathrm{L}^2(\mathcal{O})$ is a Hilbert space equipped with the inner product $(\cdot, \cdot)_0$, $$(u, v)_0 =\sum_{k=1}^{\infty}\frac{1}{\mu_k}(u,e_k)(v,e_k)= \left(Q^{-\frac{1}{2}}u, Q^{-\frac{1}{2}}v\right),\ \text{for all } \ u, v\in \mathrm{L}^2_Q(\mathcal{O}),$$ where $Q^{-\frac{1}{2}}$ is the pseudo-inverse of $Q^{\frac{1}{2}}$. Let $\mathcal{L}(\mathrm{L}^2(\mathcal{O}))$ denote the space of all bounded linear operators on $\mathrm{L}^2(\mathcal{O})$ and $\mathcal{L}_{Q}:=\mathcal{L}_{Q}(\mathrm{L}^2(\mathcal{O}))$ denote the space of all Hilbert-Schmidt operators from $\mathrm{L}^2_Q(\mathcal{O})=Q^{\frac{1}{2}}\mathrm{L}^2(\mathcal{O})$ to $\mathrm{L}^2(\mathcal{O})$. Since $Q$ is a trace class operator, the embedding of $\mathrm{L}^2_Q(\mathcal{O})$ in $\mathrm{L}^2(\mathcal{O})$ is Hilbert-Schmidt and the space $\mathcal{L}_{Q}$ is a Hilbert space equipped with the norm $$ \left\|\Psi\right\|^2_{\mathcal{L}_{Q}}=\mathop{\mathrm{Tr}}\left(\Psi {Q}\Psi^*\right)=\sum\limits_{k=1}^{\infty}\| {Q}^{1/2}\Psi^*e_k\|_{\mathrm{L}^2}^2 $$ and the inner product $$ \left(\Psi,\Phi\right)_{\mathcal{L}_{Q}}=\mathop{\mathrm{Tr}}\left(\Psi {Q}\Phi^*\right)=\sum_{k=1}^{\infty}\left({Q}^{1/2}\Phi^*e_k,{Q}^{1/2}\Psi^*e_k\right). $$ For more details, the interested readers are referred to \cite{DaZ}.
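To make the expansion $W(t)=\sum_{k}\sqrt{\mu_k}e_k\beta_k(t)$ concrete, the following minimal sketch samples a truncated version of it and compares a Monte Carlo estimate of $\mathbb{E}[(W(t),h)(W(t),g)]$ with the exact value $t(Qh,g)=t\sum_{k}\mu_k(h,e_k)(g,e_k)$; the eigenpairs $\mu_k=(k\pi)^{-2}$, $e_k=\sqrt{2}\sin(k\pi x)$, the truncation level and the test functions are assumptions made purely for this illustration.
\begin{verbatim}
import numpy as np

K, N, t, S = 16, 201, 0.7, 20000      # modes, grid points, time, samples
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
k = np.arange(1, K + 1)
mu = 1.0 / (k * np.pi) ** 2           # assumed eigenvalues of Q (trace class)
e = np.sqrt(2.0) * np.sin(np.outer(k * np.pi, x))

rng = np.random.default_rng(1)
beta_t = np.sqrt(t) * rng.standard_normal((S, K))   # beta_k(t) ~ N(0, t)
W = (beta_t * np.sqrt(mu)) @ e                      # S samples of W(t) on grid

h, g = np.sin(np.pi * x), x * (1.0 - x)             # test functions
Wh, Wg = (W @ h) * dx, (W @ g) * dx                 # (W(t), h) and (W(t), g)
mc = np.mean(Wh * Wg)                               # Monte Carlo estimate
exact = t * np.sum(mu * ((e @ h) * dx) * ((e @ g) * dx))
print(mc, exact)  # the two values agree up to Monte Carlo error
\end{verbatim}
Let us assume that the noise coefficient $\sigma(\cdot,\cdot)$ satisfies the following hypothesis.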
\begin{hypothesis}\label{hyp} Let \begin{itemize} \item [(H.1)] the noise coefficient $\sigma\in\mathrm{C}([0,T]\times\mathrm{H}_0^1(\mathcal{O});\mathcal{L}_{Q}(\mathrm{L}^2(\mathcal{O})))$, \item[(H.2)] (growth condition) there exists a positive constant $K$ such that, for all $u\in \mathrm{L}^2(\mathcal{O})$ and $t\in[0,T]$, \begin{equation*} \|\sigma(t, u)\|^{2}_{\mathcal{L}_{Q}} \leq K\left(1 +\|u\|_{\mathrm{L}^2}^{2}\right), \end{equation*} \item[(H.3)] (Lipschitz condition) there exists a positive constant $L$ such that, for all $u_1, u_2\in \mathrm{L}^2(\mathcal{O})$ and $t \in [0, T]$, \begin{align*}\|\sigma(t, u_1) - \sigma(t, u_2)\|^2_{\mathcal{L}_{Q}} \leq L\|u_1 - u_2\|_{\mathrm{L}^2}^2.\end{align*} \end{itemize} \end{hypothesis} With the above functional setting, we rewrite the system \eqref{1.1}-\eqref{1.6} in the abstract form \begin{equation}\label{abstract} \left\{ \begin{aligned} du(t)&=[- \nu Au(t)-\alpha B(u(t))+\beta c(u(t))]dt+\sigma(t,u(t))dW(t), \ t\in(0,T),\\ u(0)&=u_0, \end{aligned} \right. \end{equation} where $u_0\in\mathrm{L}^{2p}(\Omega;\mathrm{L}^{2}(\mathcal{O}))$, for $p>2$. \iffalse \section{Mild Solution} \subsection{Local existence and uniqueness} With the above notations, one rewrites the abstract formulation of the system \eqref{1.1}, \eqref{1.5} and \eqref{1.6} as \begin{equation}\label{2.7} \left\{ \begin{aligned} du(t)&=[- Au(t)-\alpha B(u(t))+\beta c(u(t))]dt+\Phi dW(t), \ t\in(0,T),\\ u(0)&=u_0\in\mathrm{L}^{2\delta+1}(\mathcal{O}). \end{aligned} \right. \end{equation} Remember that the solution to the linear problem: \begin{equation}\label{2.8} \left\{ \begin{aligned} du(t)&=- Au(t)dt+\Phi dW(t), \ t\in(0,T),\\ u(0)&=u_0\in\mathrm{L}^{2\delta+1}(\mathcal{O}), \end{aligned} \right. \end{equation} is unique and is given by the stochastic convolution: \begin{align} W_A(t)=\int_0^tR(t-s)\Phi dW(s), \end{align} where $R(t)=e^{-\nu tA}$. Let us set \begin{align} v(t)=u(t)-W_A(t), \ t\geq 0. \end{align} Then $u(\cdot)$ is a solution to \eqref{2.7} if and only if $v(\cdot)$ is a solution of \begin{eqnarray}\label{2.13} \left\{ \begin{aligned} \frac{dv(t)}{dt}&=- Av(t)-\frac{\alpha}{\delta+1}\partial_x((v(t)+W_A(t))^{\delta+1})\\&\quad+\beta(v(t)+W_A(t))(1-(v(t)+W_A(t))^{\delta})((v(t)+W_A(t))^{\delta}-\gamma), \ t\in(0,T),\\ v(0)&=u_0. \end{aligned} \right. \end{eqnarray} We rewrite \eqref{2.13} as \begin{align}\label{2.14} v(t)&=R(t)u_0-\frac{\alpha}{\delta+1}\int_0^tR(t-s)\partial_x((v(s)+W_A(s))^{\delta+1})ds\\&\quad+\beta\int_0^tR(t-s)(v(s)+W_A(s))(1-(v(s)+W_A(s))^{\delta})((v(s)+W_A(s))^{\delta}-\gamma)ds,\nonumber \end{align} then if $v(\cdot)$ satisfies \eqref{2.14}, we say that it is a mild solution of \eqref{2.13}. By using fixed point arguments in the space $\mathrm{C}([0,T^*];\mathrm{L}^{2\delta+1}(\mathcal{O}))$ for some $T^*>0$, we show the existence of a mild solution to the system \eqref{2.13}. Let us set \begin{align} \Sigma(m,T^*)=\left\{v\in\mathrm{C}([0,T^*];\mathrm{L}^{2\delta+1}(\mathcal{O})):\|v(t)\|_{\mathrm{L}^{2\delta+1}(\mathcal{O})}\leq m, \ \text{ for all }\ t\in[0,T^*] \right\}. \end{align} Let the initial datum $u_0$ be $\mathscr{F}_0$-measurable and belong to $\mathrm{L}^{2\delta+1}(\mathcal{O})$, $\mathbb{P}$-a.s. Let us now show that \eqref{2.14} has a meaning as an equality in $\mathrm{L}^{2\delta+1}(\mathcal{O})$ and establish the existence of a mild solution to \eqref{2.13}.
\begin{theorem}[Local existence]\label{thm31} For $\|u_0\|_{\mathrm{L}^{2\delta+1}}<m$, there exists a stopping time $T^*$ such that \eqref{2.14} has a unique solution in $\Sigma(m,T^*)$. \end{theorem} \begin{proof} Let us take any $v\in\Sigma(m,T^*)$ and define $z=Gv$ by \begin{align}\label{3.16} z(t)&=R(t)u_0-\frac{\alpha}{\delta+1}\int_0^tR(t-s)\partial_x((v(s)+W_A(s))^{\delta+1})ds\\&\quad+\beta\int_0^tR(t-s)(v(s)+W_A(s))(1-(v(s)+W_A(s))^{\delta})((v(s)+W_A(s))^{\delta}-\gamma)ds.\nonumber \end{align} Then, we have \begin{align}\label{3.17} & \|z(t)\|_{\mathrm{L}^{2\delta+1}}\nonumber\\&\leq \|R(t)u_0\|_{\mathrm{L}^{2\delta+1}}+\frac{\alpha}{\delta+1}\int_0^t\left\|R(t-s)\partial_x((v(s)+W_A(s))^{\delta+1})\right\|_{\mathrm{L}^{2\delta+1}}ds\\&\quad+\beta\int_0^t\left\|R(t-s)(v(s)+W_A(s))(1-(v(s)+W_A(s))^{\delta})((v(s)+W_A(s))^{\delta}-\gamma)\right\|_{\mathrm{L}^{2\delta+1}}ds.\nonumber \end{align} Remember that $e^{-tA}$ is a contraction semigroup on $\mathrm{L}^{2\delta+1}(\mathcal{O})$. In order to estimate the terms in the right hand side of the inequality (\ref{3.17}), the following Sobolev embedding is needed: \begin{eqnarray}\label{sobolev} \|u\|_{\mathrm{L}^{q_1}}\leq C\|u\|_{\mathrm{W}^{k,q_2}},\ \text{ whenever }\ k<\frac{1}{q_2}, \end{eqnarray} where $\frac{1}{q_1}=\frac{1}{q_2}-k$ (Theorem 6, page 284, \cite{LCE}). We also need a smoothing property of the heat semigroup, i.e., for any $r_1\leq r_2$ in $\mathbb{R},$ and $\theta\geq 1,$ $R(t)$ maps $\mathrm{W}^{r_1,\theta}(\mathcal{O})$ into $\mathrm{W}^{r_2,\theta}(\mathcal{O}),$ for all $t>0.$ Furthermore, the following estimate holds (see Lemma 3, Part I, \cite{FR}, \cite{GDP}) \begin{eqnarray}\label{smoothing} \|R(t)u\|_{\mathrm{W}^{r_2,\theta}}\leq C(t^{\frac{r_1-r_2}{2}}+1)\|u\|_{\mathrm{W}^{r_1,\theta}}, \end{eqnarray} for all $u\in {\mathrm{W}^{r_1,\theta}(\mathcal{O})},$ where $C=C(r_1,r_2,\theta)$ is a positive constant. Applying (\ref{sobolev}) with $ q_1=2\delta+1$, $q_2=1$ and $k=\frac{2\delta}{2\delta+1},$ and then using the smoothing property (\ref{smoothing}) with $r_1=-1$, $r_2= \frac{2\delta}{2\delta+1}$ and $\theta=1$, we evaluate \begin{align}\label{3.76} & \|R(t-s)\partial_x((v(s)+W_A(s))^{\delta+1})\|_{\mathrm{L}^{2\delta+1}}\nonumber\\& \leq C\|R(t-s)\partial_x((v(s)+W_A(s))^{\delta+1})\|_{\mathrm{W}^{\frac{2\delta}{2\delta+1},{1}}}\nonumber\\ &\leq C(1+(t-s)^{\frac{-1-4\delta}{2(2\delta+1)}})\|\partial_x((v(s)+W_A(s))^{\delta+1})\|_{\mathrm{W}^{{-1},{1}}}\nonumber\\ &=C(1+(t-s)^{\frac{-1-4\delta}{2(2\delta+1)}})\|((v(s)+W_A(s))^{\delta+1})\|_{\mathrm{L}^{1}}\nonumber\\& =C(1+(t-s)^{\frac{-1-4\delta}{2(2\delta+1)}})\|v(s)+W_A(s)\|_{\mathrm{L}^{\delta+1}}^{\delta+1}. \end{align} Taking $r_1=0,$ $r_2=\frac{2\delta}{2\delta+1}$ and $\theta=1$, we also obtain \begin{align}\label{3.21} & \|R(t-s)(v(s)+W_A(s))^{\delta+1}\|_{\mathrm{L}^{2\delta+1}}\nonumber\\&\leq C\|R(t-s)(v(s)+W_A(s))^{\delta+1}\|_{\mathrm{W}^{\frac{2\delta}{2\delta+1},{1}}}\nonumber\\ &\leq C(1+(t-s)^{\frac{-2\delta}{2(2\delta+1)}})\|(v(s)+W_A(s))^{\delta+1}\|_{\mathrm{L}^{1}}\nonumber\\&\leq C(1+(t-s)^{\frac{-2\delta}{2(2\delta+1)}})\|v(s)+W_A(s)\|_{\mathrm{L}^{\delta+1}}^{\delta+1}. 
\end{align} Similarly, we estimate the terms $\|R(t-s)(u(s))^{2\delta+1}\|_{\mathrm{L}^{2\delta+1}}$ as \begin{align} \|R(t-s)(v(s)+W_A(s))^{2\delta+1}\|_{\mathrm{L}^{2\delta+1}}&\leq C(1+(t-s)^{\frac{-2\delta}{2(2\delta+1)}})\|v(s)+W_A(s)\|_{\mathrm{L}^{2\delta+1}}^{2\delta+1}.\label{3.78} \end{align} Combining \eqref{3.76}-\eqref{3.78} and substituting it in \eqref{3.17} yields \begin{align} & \|z(t)\|_{\mathrm{L}^{2\delta+1}}\nonumber\\&\leq\|u_0\|_{\mathrm{L}^{2\delta+1}}+\frac{C\alpha}{\delta+1}\int_0^t(1+(t-s)^{\frac{-1-4\delta}{2(2\delta+1)}})\|v(s)+W_A(s)\|_{\mathrm{L}^{\delta+1}}^{\delta+1}ds\nonumber\\&\quad+\beta\gamma\int_0^t\|v(s)+W_A(s)\|_{\mathrm{L}^{2\delta+1}}ds+C\beta(1+\gamma)\int_0^t(1+(t-s)^{\frac{-2\delta}{2(2\delta+1)}})\|v(s)+W_A(s)\|_{\mathrm{L}^{\delta+1}}^{\delta+1}ds\nonumber\\&\quad+C\beta\int_0^t(1+(t-s)^{\frac{-2\delta}{2(2\delta+1)}})\|v(s)+W_A(s)\|_{\mathrm{L}^{2\delta+1}}^{2\delta+1}ds\nonumber\\&\leq\|u_0\|_{\mathrm{L}^{2\delta+1}}+\frac{C\alpha}{\delta+1}\left(\sup_{s\in[0,t]}\|v(s)\|_{\mathrm{L}^{\delta+1}}+\sup_{s\in[0,t]}\|W_A(s)\|_{\mathrm{L}^{\delta+1}}\right)^{\delta+1}\left(t+2(2\delta+1)t^{\frac{1}{2(2\delta+1)}}\right)\nonumber\\&\quad+\beta\gamma t\left(\sup_{s\in[0,t]}\|v(s)\|_{\mathrm{L}^{2\delta+1}}+\sup_{s\in[0,t]}\|W_A(s)\|_{\mathrm{L}^{2\delta+1}}\right)\nonumber\\&\quad+C\beta(1+\gamma)\left(\sup_{s\in[0,t]}\|v(s)\|_{\mathrm{L}^{\delta+1}}+\sup_{s\in[0,t]}\|W_A(s)\|_{\mathrm{L}^{\delta+1}}\right)^{\delta+1}\left(t+\frac{2\delta+1}{\delta+1}t^{\frac{\delta+1}{2\delta+1}}\right)\nonumber\\&\quad+C\beta\left(\sup_{s\in[0,t]}\|v(s)\|_{\mathrm{L}^{2\delta+1}}+\sup_{s\in[0,t]}\|W_A(s)\|_{\mathrm{L}^{2\delta+1}}\right)^{2\delta+1}\left(t+\frac{2\delta+1}{\delta+1}t^{\frac{\delta+1}{2\delta+1}}\right)\nonumber\\&\leq\|u_0\|_{\mathrm{L}^{2\delta+1}}+\frac{C\alpha}{\delta+1}\left(t+2(2\delta+1)t^{\frac{1}{2(2\delta+1)}}\right)\left(m+\mu_{\delta+1}\right)^{\delta+1}+\beta\gamma t(m+\mu_2)\nonumber\\&\quad+C\beta(1+\gamma)\left(m+\mu_{\delta+1}\right)^{\delta+1}\left(t+\frac{2\delta+1}{\delta+1}t^{\frac{\delta+1}{2\delta+1}}\right)+C\beta\left(m+\mu_{2\delta+1}\right)^{2\delta+1}\left(t+\frac{2\delta+1}{\delta+1}t^{\frac{\delta+1}{2\delta+1}}\right), \end{align} where $$\mu_p=\sup_{t\in[0,T]}\|W_A(t)\|_{\mathrm{L}^p}.$$ Thus, $\|z(t)\|_{\mathrm{L}^{2\delta+1}}\leq m$, for all $t\in[0,T^*]$, provided \begin{align}\label{3.24} &\|u_0\|_{\mathrm{L}^{2\delta+1}}+\frac{C\alpha}{\delta+1}\left(t+2(2\delta+1)t^{\frac{1}{2(2\delta+1)}}\right)\left(m+\mu_{\delta+1}\right)^{\delta+1}+\beta\gamma t(m+\mu_2)\nonumber\\&\quad+C\beta(1+\gamma)\left(m+\mu_{\delta+1}\right)^{\delta+1}\left(t+\frac{2\delta+1}{\delta+1}t^{\frac{\delta+1}{2\delta+1}}\right)+C\beta\left(m+\mu_{2\delta+1}\right)^{2\delta+1}\left(t+\frac{2\delta+1}{\delta+1}t^{\frac{\delta+1}{2\delta+1}}\right)\nonumber\\&\leq m. \end{align} Since $\|u_0\|_{\mathrm{L}^{2\delta+1}}<m$, there exists a $T^*$ satisfying \eqref{3.24}. Let us now consider $v_1,v_2\in\Sigma(m,T^*)$ and set $z_i=Gv_i$, for $i=1,2$ and $z=z_1-z_2$. Then $z(t)$ satisfies \begin{align}\label{3.25} z(t)&=\frac{\alpha}{\delta+1}\int_0^tR(t-s)\partial_x((v_1(s)+W_A(s))^{\delta+1}-(v_2(s)+W_A(s))^{\delta+1})ds\nonumber\\&\quad+\beta\int_0^tR(t-s)\Big\{\left[(v_1(s)+W_A(s))(1-(v_1(s)+W_A(s))^{\delta})((v_1(s)+W_A(s))^{\delta}-\gamma)\right]\nonumber\\&\qquad-\left[(v_2(s)+W_A(s))(1-(v_2(s)+W_A(s))^{\delta})((v_2(s)+W_A(s))^{\delta}-\gamma)\right]\Big\}ds. 
\end{align} Using Taylor's formula, for some $0<\theta_21<1$, we have \begin{align} &(v_1+W_A)^{\delta+1}-(v_2+W_A)^{\delta+1}\nonumber\\&=(\delta+1)(v_1-v_2)(\theta_1(v_1+W_A)+(1-\theta_1)(v_2+W_A))^{\delta}. \end{align} Similarly, for some $0<\theta_2<1$, we get \begin{align*} & \left[(v_1+W_A)(1-(v_1+W_A)^{\delta})((v_1+W_A)^{\delta}-\gamma)\right]\nonumber\\&\quad-\left[(v_2+W_A)(1-(v_2+W_A)^{\delta})((v_2+W_A)^{\delta}-\gamma)\right]\nonumber\\&= (v_1-v_2)(1-(v_1+W_A)^{\delta})((v_1+W_A)^{\delta}-\gamma)\nonumber\\&\quad+(v_2+W_A)((v_1+W_A)^{\delta}-(v_2+W_A)^{\delta})(1+\gamma-((v_1+W_A)^{\delta}+(v_2+W_A)^{\delta}))\nonumber\\&= (v_1-v_2)((1+\gamma)(v_1+W_A)^{\delta}-\gamma-(v_1+W_A)^{2\delta})\nonumber\\&\quad+\delta(v_1-v_2)(v_2+W_A)(\theta_2 (v_1+W_A)+(1-\theta_2)(v_2+W_A))^{\delta-1}\nonumber\\&\qquad\times(1+\gamma-((v_1+W_A)^{\delta}+(v_2+W_A)^{\delta}))\nonumber\\&=(1+\gamma)z(v_1+W_A)^{\delta}-\gamma z-z(v_1+W_A)^{2\delta}\nonumber\\&\quad+\delta(1+\gamma)z(v_2+W_A)(\theta_2 (v_1+W_A)+(1-\theta_2)(v_2+W_A))^{\delta-1}\nonumber\\&\quad-\delta z(v_1+W_A)^{\delta}(v_2+W_A)(\theta_2 (v_1+W_A)+(1-\theta_2)(v_2+W_A))^{\delta-1}\nonumber\\&\quad-\delta z(v_2+W_A)^{\delta+1}(\theta_2 (v_1+W_A)+(1-\theta_2)(v_2+W_A))^{\delta-1}. \end{align*} A calculation similar to \eqref{3.76} yields \begin{align}\label{3.27} &\|R(t-s)\partial_x((v_1(s)+W_A(s))^{\delta+1}-(v_2(s)+W_A(s))^{\delta+1})\|_{\mathrm{L}^{2\delta+1}}\nonumber\\&=(\delta+1)\|R(t-s)\partial_x\left[z(s)((\theta_1(v_1(s)+W_A(s))+(1-\theta_1)(v_2(s)+W_A(s)))^{\delta})\right]\|_{\mathrm{L}^{2\delta+1}}\nonumber\\&\leq C(\delta+1)(1+(t-s)^{\frac{-1-4\delta}{2(2\delta+1)}})\|z(s)\|_{\mathrm{L}^{\delta+1}}\|\theta_1(v_1(s)+W_A(s))+(1-\theta_1)(v_2(s)+W_A(s))\|_{\mathrm{L}^{\delta+1}}^{\delta}\nonumber\\&\leq C(\delta+1)(1+(t-s)^{\frac{-1-4\delta}{2(2\delta+1)}})\|z(s)\|_{\mathrm{L}^{\delta+1}}\left(\|v_1(s)\|_{\mathrm{L}^{\delta+1}}+\|v_2(s)\|_{\mathrm{L}^{\delta+1}}+2\|W_A(s)\|_{\mathrm{L}^{\delta+1}}\right)^{\delta}\nonumber\\&\leq 2C(\delta+1)(m+\mu_{\delta+1})(1+(t-s)^{\frac{-1-4\delta}{2(2\delta+1)}})\|z(s)\|_{\mathrm{L}^{\delta+1}}. 
\end{align} Similar to the estimate \eqref{3.21}, we have \begin{align}\label{3.28} & \Big\|R(t-s)\Big\{\left[(v_1(s)+W_A(s))(1-(v_1(s)+W_A(s))^{\delta})((v_1(s)+W_A(s))^{\delta}-\gamma)\right]\nonumber\\&\qquad-\left[(v_2(s)+W_A(s))(1-(v_2(s)+W_A(s))^{\delta})((v_2(s)+W_A(s))^{\delta}-\gamma)\right]\Big\}\Big\|_{\mathrm{L}^{2\delta+1}}\nonumber\\&\leq C(1+\gamma)(1+(t-s)^{\frac{-2\delta}{2(2\delta+1)}})\|v_1(s)+W_A(s)\|_{\mathrm{L}^{\delta+1}}^{\delta}\|z(s)\|_{\mathrm{L}^{\delta+1}}\nonumber\\&\quad+\gamma\|z(s)\|_{\mathrm{L}^{2\delta+1}}+C(1+(t-s)^{\frac{-2\delta}{2(2\delta+1)}})\|v(s)+W_A(s)\|_{\mathrm{L}^{2\delta+1}}^{2\delta}\|z(s)\|_{\mathrm{L}^{2\delta+1}}\nonumber\\&\quad+C\delta(1+\gamma)(1+(t-s)^{\frac{-2\delta}{2(2\delta+1)}})\|v_2(s)+W_A(s)\|_{\mathrm{L}^{\delta+1}}\nonumber\\&\qquad\times\left(\|v_1(s)\|_{\mathrm{L}^{\delta+1}}+\|v_2(s)\|_{\mathrm{L}^{\delta+1}}+2\|W_A(s)\|_{\mathrm{L}^{\delta+1}}\right)^{\delta-1}\|z(s)\|_{\mathrm{L}^{\delta+1}}\nonumber\\&\quad+C\delta(1+(t-s)^{\frac{-2\delta}{2(2\delta+1)}})\|v_1(s)+W_A(s)\|_{\mathrm{L}^{2\delta+1}}^{\delta}\|v_2(s)+W_A(s)\|_{\mathrm{L}^{2\delta+1}}\nonumber\\&\qquad\times\left(\|v_1(s)\|_{\mathrm{L}^{2\delta+1}}+\|v_2(s)\|_{\mathrm{L}^{2\delta+1}}+2\|W_A(s)\|_{\mathrm{L}^{2\delta+1}}\right)^{\delta-1}\|z(s)\|_{\mathrm{L}^{2\delta+1}}\nonumber\\&\quad+C\delta(1+(t-s)^{\frac{-2\delta}{2(2\delta+1)}})\|v_2(s)+W_A(s)\|_{\mathrm{L}^{2\delta+1}}^{\delta+1}\nonumber\\&\qquad\times\left(\|v_1(s)\|_{\mathrm{L}^{2\delta+1}}+\|v_2(s)\|_{\mathrm{L}^{2\delta+1}}+2\|W_A(s)\|_{\mathrm{L}^{2\delta+1}}\right)^{\delta-1}\|z(s)\|_{\mathrm{L}^{2\delta+1}}\nonumber\\&\leq C(1+\gamma)(1+2\delta)(1+(t-s)^{\frac{-2\delta}{2(2\delta+1)}})(m+\mu_{\delta+1})^{\delta}\|z(s)\|_{\mathrm{L}^{\delta+1}}+\gamma\|z(s)\|_{\mathrm{L}^{2\delta+1}}\nonumber\\&\quad+4C\delta(1+(t-s)^{\frac{-2\delta}{2(2\delta+1)}})(m+\mu_{2\delta+1})^{2\delta}\|z(s)\|_{\mathrm{L}^{2\delta+1}}. 
\end{align} Combining \eqref{3.27}-\eqref{3.28} and substituting them in \eqref{3.25}, we obtain \begin{align} \|z(t)\|_{\mathrm{L}^{2\delta+1}}&\leq 2C\alpha(m+\mu_{\delta+1})\int_0^t(1+(t-s)^{\frac{-1-4\delta}{2(2\delta+1)}})\|z(s)\|_{\mathrm{L}^{\delta+1}}ds\nonumber\\&\quad+C(1+\gamma)(1+2\delta)(m+\mu_{\delta+1})^{\delta}\int_0^t(1+(t-s)^{\frac{-2\delta}{2(2\delta+1)}})\|z(s)\|_{\mathrm{L}^{\delta+1}}ds\nonumber\\&\quad+\gamma\int_0^t\|z(s)\|_{\mathrm{L}^{2\delta+1}}ds+4C\delta(m+\mu_{2\delta+1})^{2\delta}\int_0^t(1+(t-s)^{\frac{-2\delta}{2(2\delta+1)}})\|z(s)\|_{\mathrm{L}^{2\delta+1}}ds.\nonumber \end{align} Hence, we get \begin{align} \sup_{s\in[0,t]}\|z(s)\|_{\mathrm{L}^{2\delta+1}}&\leq 2C\alpha(m+\mu_{\delta+1})\left(t+2(2\delta+1)t^{\frac{1}{2(2\delta+1)}}\right) \sup_{s\in[0,t]}\|z(s)\|_{\mathrm{L}^{\delta+1}}+\gamma t\sup_{s\in[0,t]}\|z(s)\|_{\mathrm{L}^{2\delta+1}}\nonumber\\&\quad +C(1+\gamma)(1+2\delta)(m+\mu_{\delta+1})^{\delta}\left(t+\frac{2\delta+1}{\delta+1}t^{\frac{\delta+1}{2\delta+1}}\right)\sup_{s\in[0,t]}\|z(s)\|_{\mathrm{L}^{\delta+1}}\nonumber\\&\quad+4C\delta(m+\mu_{2\delta+1})^{2\delta}\left(t+\frac{2\delta+1}{\delta+1}t^{\frac{\delta+1}{2\delta+1}}\right)\sup_{s\in[0,t]}\|z(s)\|_{\mathrm{L}^{2\delta+1}}. \end{align} We can choose a $T^*$ such that \begin{align} &2C\alpha(m+\mu_{\delta+1})\left(t+2(2\delta+1)t^{\frac{1}{2(2\delta+1)}}\right) +\gamma t\nonumber\\&\quad +C\left[(1+\gamma)(1+2\delta)(m+\mu_{\delta+1})^{\delta}+4\delta(m+\mu_{2\delta+1})^{2\delta}\right]\left(t+\frac{2\delta+1}{\delta+1}t^{\frac{\delta+1}{2\delta+1}}\right)<1, \end{align} and \eqref{3.24} holds true. Hence, $G$ is a strict contraction on $\Sigma(m,T^*)$, which proves the existence of a mild solution to \eqref{2.13}. Uniqueness follows from the representation \eqref{2.14}. \end{proof} \subsection{Global existence and uniqueness} Due to technical difficulties, we are only able to prove the global existence for the Burgers-Huxley equation, i.e., the system \eqref{2.13} with $\delta=1$ (for any $\nu>0$ and $\beta>0$). The result obtained in Theorem \ref{thm3.1} is valid $\mathbb{P}$-a.s., as $\mu_{{3}}$ and $T^*$ depend on $\omega\in\Omega$. In this subsection, we show that $T^*=T$, $\mathbb{P}$-a.s., and hence we can remove the dependence on $\omega$ for the time interval on which the solution exists. In order to prove our main result, we need the following lemma. \begin{lemma}\label{lem32} If $v\in\mathrm{C}([0,T];\mathrm{L}^{3}(\mathcal{O}))$ satisfies \eqref{2.14}, then \begin{align}\label{3.32} \|v(t)\|_{\mathrm{L}^{3}}&\leq \left(\|u_0\|_{\mathrm{L}^3}^3+\left(\frac{2}{3}\right)^{2}\left(\beta(1+\gamma)\mu_{\infty}^2+\beta(1+\gamma)\mu_{\infty}\right)+\left(\frac{1}{3}\right)^{\frac{1}{3}}\frac{\alpha^2}{\nu}\mu_{\infty}^4\right)\nonumber\\&\quad\times\exp\bigg\{3t\bigg(\frac{2\alpha^2}{\nu}\mu_{\infty}^2+2\beta(1+\gamma)\mu_{\infty}+\beta(1+\gamma)^2+12\beta\mu_{\infty}^2\nonumber\\&\qquad+\left(\beta(1+\gamma)\mu_{\infty}^2+\beta(1+\gamma)\mu_{\infty}\right)+\frac{\alpha^2}{2\nu}\mu_{\infty}^4\bigg)\bigg\}, \end{align} for all $t\in[0,T]$, where $\mu_{\infty}=\sup\limits_{t\in[0,T]}\|W_A(t)\|_{\mathrm{L}^{\infty}}$. \end{lemma} \begin{proof} Let $u_0^n$ be a sequence in $\mathrm{C}^{\infty}(\mathcal{O})$ such that $$u_0^n\to u_0 \ \text{ in }\ \mathrm{L}^{2\delta+1}(\mathcal{O}),$$ and let $\{W^n\}$ be a sequence of regular processes such that $$W_A^n(t)=\int_0^tR(t-s)dW^n(s)\to W_A(t)$$ in $\mathrm{C}([0,1]\times[0,1])$, $\mathbb{P}$-a.s.
Let $v_n(\cdot)$ be a solution of \begin{align}\label{3.33} v^n(t)&=R(t)u_0^n-\frac{\alpha}{2}\int_0^tR(t-s)\partial_x((v^n(s)+W_A^n(s))^{2})ds\\&\quad+\beta\int_0^tR(t-s)(v^n(s)+W_A^n(s))(1-(v^n(s)+W_A^n(s)))((v^n(s)+W_A^n(s))-\gamma)ds.\nonumber \end{align} Making use of Theorem \ref{thm31}, we know that $v^n$ exists on an interval $[0,T_n]$ such that $T_n\to T^*$, $\mathbb{P}$-a.s., and that $v^n$ converges to $v$ in $\mathrm{C}([0,T];\mathrm{L}^{3}(\mathcal{O}))$, $\mathbb{P}$-a.s. Furthermore, $v^n(\cdot)$ is regular $\mathbb{P}$-a.s. and satisfies: \begin{align}\label{3.34} \frac{\partial v^n}{\partial t}=\nu\frac{\partial^2v^n}{\partial x^2}-\alpha (v^n+W_A^n)\frac{\partial }{\partial x}(v^n+W_A^n)+\beta (v^n+W_A^n)(1-(v^n+W_A^n))((v^n+W_A^n)-\gamma). \end{align} \iffalse Taking inner product with $v^n$, applying integration by parts and then using Taylor's formula, we get \begin{align} &\frac{1}{2}\frac{d}{dt}\|v^n(t)\|_{\mathrm{L}^2}^2+\nu\|\partial_xv^n(t)\|_{\mathrm{L}^2}^2\nonumber\\&= -\alpha( (v^n(t)+W_A^n(t))^{\delta}\partial_x(v^n(t)+W_A^n(t)),v^n(t))\nonumber\\&\quad +\beta ((v^n(t)+W_A^n(t))(1-(v^n(t)+W_A^n(t))^{\delta})((v^n(t)+W_A^n(t))^{\delta}-\gamma),v^n(t))\nonumber\\&=-\frac{\alpha}{\delta+1}(\partial_x(v^n(t)+W_A^n(t))^{\delta+1},v^n(t))+\beta(1+\gamma)((v^n(t)+W_A^n(t))^{\delta+1},v^n(t))\nonumber\\&\quad-\beta\gamma((v^n(t)+W_A^n(t)),v^n(t))-\beta((v^n(t)+W_A^n(t))^{2\delta+1},v^n(t))\nonumber\\&=\frac{\alpha}{\delta+1}((v^n(t)+W_A^n(t))^{\delta+1},\partial_xv^n(t))+\beta(1+\gamma)((v^n(t)+W_A^n(t))^{\delta+1},v^n(t))\nonumber\\&\quad-\beta\gamma\|v^n(t)\|_{\mathrm{L}^2}^2-\beta\gamma(W_A^n(t),v^n(t))-\beta\|v^n(t)\|_{\mathrm{L}^{2(\delta+1)}}^{2(\delta+1)}\nonumber\\&\quad+\beta(2\delta+1)(W_A^n(t)(\theta_2v^n(t)+(1-\theta_2)W_A^n(t))^{\delta},v^n(t)). \end{align} Thus, it is immediate that \begin{align} &\frac{1}{2}\frac{d}{dt}\|v^n(t)\|_{\mathrm{L}^2}^2+\nu\|\partial_xv^n(t)\|_{\mathrm{L}^2}^2+\beta\gamma\|v^n(t)\|_{\mathrm{L}^2}^2+\beta\|v^n(t)\|_{\mathrm{L}^{2(\delta+1)}}^{2(\delta+1)}\nonumber\\&=\frac{\alpha}{\delta+1}((v^n(t)+W_A^n(t))^{\delta+1},\partial_xv^n(t))+\beta(1+\gamma)((v^n(t)+W_A^n(t))^{\delta+1},v^n(t))\nonumber\\&\quad-\beta\gamma(W_A^n(t),v^n(t))+\beta(2\delta+1)(W_A^n(t)(\theta_2v^n(t)+(1-\theta_2)W_A^n(t))^{\delta},v^n(t)). \end{align} \fi Multiplying \eqref{3.34} by $|v^n|^{p-2}v^n$, integrating over $(0,1)$ and then using Taylor's formula, we find \begin{align}\label{3.35} &\frac{1}{p}\frac{d}{d t}\|v^n(t)\|_{\mathrm{L}^{p}}^p+\nu(p-1)\||v_n(t)|^{\frac{p-2}{2}}\partial_xv^n(t)\|_{\mathrm{L}^2}^2\nonumber\\&=-\alpha((v^n(t)+W_A^n(t))\partial_x(v^n(t)+W_A^n(t)),|v^n(t)|^{p-2}v^n(t))\nonumber\\&\quad +\beta(1+\gamma)((v^n(t)+W_A^n(t))^{2},|v^n(t)|^{p-2}v^n(t))\nonumber\\&\quad-\beta\gamma(v^n(t)+W_A^n(t),|v^n(t)|^{p-2}v^n(t))-\beta((v^n(t)+W_A^n(t))^{3},|v^n(t)|^{p-2}v^n(t))\nonumber\\&=-\frac{\alpha}{2}(\partial_x(v^n(t)+W_A^n(t))^{2},|v^n(t)|^{p-2}v^n(t))\nonumber\\&\quad +\beta(1+\gamma)((v^n(t)+W_A^n(t))^{2},|v^n(t)|^{p-2}v^n(t))\nonumber\\&\quad-\beta\gamma\|v^n(t)\|_{\mathrm{L}^p}^p-\beta\gamma(W_A^n(t),|v^n(t)|^{p-2}v^n(t))-\beta\|v^n(t)\|_{\mathrm{L}^{2+p}}^{2+p}\nonumber\\&\quad-3\beta(v^n(t)^2W_A^n(t),|v^n(t)|^{p-2}v^n(t))-3\beta(v^n(t)W_A^n(t)^2,|v^n(t)|^{p-2}v^n(t))\nonumber\\&\quad-\beta(W_A^n(t)^3,|v^n(t)|^{p-2}v^n(t)). 
\end{align} Using an integration by parts, it can be easily deduced from \eqref{3.35} that \begin{align}\label{3.36} &\frac{1}{p}\frac{d}{d t}\|v^n(t)\|_{\mathrm{L}^{p}}^p+\nu(p-1)\||v_n(t)|^{\frac{p-2}{2}}\partial_xv^n(t)\|_{\mathrm{L}^2}^2+\beta\gamma\|v^n(t)\|_{\mathrm{L}^p}^p+\beta\|v^n(t)\|_{\mathrm{L}^{2+p}}^{2+p}\nonumber\\&=\frac{\alpha(p-1)}{2}(v^n(t)^2,|v^n(t)|^{p-2}\partial_xv^n(t))+\alpha(p-1)(v^n(t)W_A^n(t),|v^n(t)|^{p-2}\partial_xv^n(t))\nonumber\\&\quad+\frac{\alpha(p-1)}{2}(W_A^n(t)^2,|v^n(t)|^{p-2}\partial_xv^n(t))+\beta(1+\gamma)(W_A^n(t)^2,|v^n(t)|^{p-2}v^n(t))\nonumber\\&\quad+2\beta(1+\gamma)(W_A^n(t)v^n(t),|v^n(t)|^{p-2}v^n(t))+\beta(1+\gamma)(v^n(t)^2,|v^n(t)|^{p-2}v^n(t))\nonumber\\&\quad-\beta\gamma(W_A^n(t),|v^n(t)|^{p-2}v^n(t))-3\beta(v^n(t)^2W_A^n(t),|v^n(t)|^{p-2}v^n(t))\nonumber\\&\quad-3\beta(v^n(t)W_A^n(t)^2,|v^n(t)|^{p-2}v^n(t))-\beta(W_A^n(t)^3,|v^n(t)|^{p-2}v^n(t))=:\sum_{j=1}^{10}I_j, \end{align} where $I_j$, $j=1,\ldots,10$ denotes the terms appearing in the right hand side of the equality \eqref{3.36}. Note that $I_1=0$, by using \eqref{7a}. Let us estimate $I_j$, $j=2,\ldots,10$, using H\"older's, interpolation and Young's inequalities as \begin{align}\label{3.37} I_2&\leq \alpha(p-1)\||v^n|^{\frac{p}{2}}W_A^n\|_{\mathrm{L}^2}\||v^n|^{\frac{p-2}{2}}\partial_xv^n\|_{\mathrm{L}^2}\nonumber\\&\leq \frac{\nu(p-1)}{4}\||v^n|^{\frac{p-2}{2}}\partial_xv^n\|_{\mathrm{L}^2}^2+\frac{\alpha^2(p-1)}{\nu}\|W_A^n\|_{\mathrm{L}^{\infty}}^2\|v^n\|_{\mathrm{L}^p}^p,\\ I_3&\leq \frac{\alpha(p-1)}{2}\||v^n|^{\frac{p-2}{2}}(W_A^n)^2\|_{\mathrm{L}^2}\||v^n|^{\frac{p-2}{2}}\partial_xv^n\|_{\mathrm{L}^2}\nonumber\\&\leq\frac{\nu(p-1)}{4}\||v^n|^{\frac{p-2}{2}}\partial_xv^n\|_{\mathrm{L}^2}^2+\frac{\alpha^2(p-1)}{4\nu}\|W_A^n\|_{\mathrm{L}^{\infty}}^4\|v^n\|_{\mathrm{L}^{p-2}}^{p-2},\\ I_4&\leq\beta(1+\gamma)\|W_A^n\|_{\mathrm{L}^{\infty}}^2\|v^n\|_{\mathrm{L}^{p-1}}^{p-1},\\ I_5&\leq 2\beta(1+\gamma)\|W_A^n\|_{\mathrm{L}^{\infty}}\|v^n\|_{\mathrm{L}^p}^p,\\ I_6&\leq\beta(1+\gamma)\|v^n\|_{\mathrm{L}^{p+1}}^{p+1}\leq\beta(1+\gamma)\|v^n\|_{\mathrm{L}^p}^{\frac{p}{2}}\|v^n\|_{\mathrm{L}^{p+2}}^{\frac{p+2}{2}}\nonumber\\&\leq\frac{\beta}{4}\|v^n\|_{\mathrm{L}^{p+2}}^{p+2}+\beta(1+\gamma)^2\|v^n\|_{\mathrm{L}^p}^p,\\ I_7&\leq\beta\gamma\|W_A^n\|_{\mathrm{L}^{\infty}}\|v^n\|_{\mathrm{L}^{p-1}}^{p-1},\\ I_8&\leq 3\beta\|W_A^n\|_{\mathrm{L}^{\infty}}\|v^n\|_{\mathrm{L}^{p+1}}^{p+1}\leq\frac{\beta}{4}\|v^n\|_{\mathrm{L}^{p+2}}^{p+2}+9\beta\|W_A^n\|_{\mathrm{L}^{\infty}}^2\|v^n\|_{\mathrm{L}^p}^p,\\ I_9&\leq 3\beta\|W_A^n\|_{\mathrm{L}^{\infty}}^2\|v^n\|_{\mathrm{L}^p}^p,\\ I_{10}&\leq\beta \|W_A^n\|_{\mathrm{L}^{\infty}}\|v^n\|_{\mathrm{L}^{p-1}}^{p-1}.\label{3.45} \end{align} Combining \eqref{3.37}-\eqref{3.45} and substituting it in \eqref{3.36}, we get \begin{align} &\frac{1}{p}\frac{d}{d t}\|v^n(t)\|_{\mathrm{L}^{p}}^p+\frac{\nu(p-1)}{2}\||v_n(t)|^{\frac{p-2}{2}}\partial_xv^n(t)\|_{\mathrm{L}^2}^2+\beta\gamma\|v^n(t)\|_{\mathrm{L}^p}^p+\frac{\beta}{2}\|v^n(t)\|_{\mathrm{L}^{2+p}}^{2+p}\nonumber\\&\leq 
\left(\frac{\alpha^2(p-1)}{\nu}\|W_A^n(t)\|_{\mathrm{L}^{\infty}}^2+2\beta(1+\gamma)\|W_A^n(t)\|_{\mathrm{L}^{\infty}}+\beta(1+\gamma)^2+12\beta\|W_A^n(t)\|_{\mathrm{L}^{\infty}}^2\right)\|v^n(t)\|_{\mathrm{L}^p}^p\nonumber\\&\quad+\left(\beta(1+\gamma)\|W_A^n(t)\|_{\mathrm{L}^{\infty}}^2+\beta(1+\gamma)\|\mathrm{W}_A^n(t)\|_{\mathrm{L}^{\infty}}\right)\|v^n(t)\|_{\mathrm{L}^{p-1}}^{p-1}\nonumber\\&\quad+\frac{\alpha^2(p-1)}{4\nu}\|W_A^n(t)\|_{\mathrm{L}^{\infty}}^4\|v^n(t)\|_{\mathrm{L}^{p-2}}^{p-2}. \end{align} Integrating the above inequality from $0$ to $t$, we find \begin{align}\label{3.47} &\|v^n(t)\|_{\mathrm{L}^p}^p+\frac{\nu p(p-1)}{2}\int_0^t\||v_n(s)|^{\frac{p-2}{2}}\partial_xv^n(s)\|_{\mathrm{L}^2}^2ds+p\beta\gamma\int_0^t\|v^n(s)\|_{\mathrm{L}^p}^pds+\frac{p\beta}{2}\int_0^t\|v^n(s)\|_{\mathrm{L}^{2+p}}^{2+p}ds\nonumber\\&\leq\|u_0\|_{\mathrm{L}^p}^p+\left(\frac{p-1}{p}\right)^{p-1}\left(\beta(1+\gamma)\|W_A^n(t)\|_{\mathrm{L}^{\infty}}^2+\beta(1+\gamma)\|\mathrm{W}_A^n(t)\|_{\mathrm{L}^{\infty}}\right)\nonumber\\&\quad +\left(\frac{p-2}{p}\right)^{\frac{p-2}{p}}\frac{\alpha^2(p-1)}{2\nu}\|W_A^n(t)\|_{\mathrm{L}^{\infty}}^4\nonumber\\&\quad+p\int_0^t\bigg(\frac{\alpha^2(p-1)}{\nu}\|W_A^n(s)\|_{\mathrm{L}^{\infty}}^2+2\beta(1+\gamma)\|W_A^n(s)\|_{\mathrm{L}^{\infty}}+\beta(1+\gamma)^2+12\beta\|W_A^n(s)\|_{\mathrm{L}^{\infty}}^2\nonumber\\&\qquad+\left(\beta(1+\gamma)\|W_A^n(t)\|_{\mathrm{L}^{\infty}}^2+\beta(1+\gamma)\|\mathrm{W}_A^n(t)\|_{\mathrm{L}^{\infty}}\right)+\frac{\alpha^2(p-1)}{4\nu}\|W_A^n(t)\|_{\mathrm{L}^{\infty}}^4\bigg)\|v^n(s)\|_{\mathrm{L}^p}^pds. \end{align} An application of Gronwall's inequality in \eqref{3.47} gives \begin{align} &\|v^n(t)\|_{\mathrm{L}^p}^p+\frac{\nu p(p-1)}{2}\int_0^t\||v_n(s)|^{\frac{p-2}{2}}\partial_xv^n(s)\|_{\mathrm{L}^2}^2ds+p\beta\gamma\int_0^t\|v^n(s)\|_{\mathrm{L}^p}^pds+\frac{p\beta}{2}\int_0^t\|v^n(s)\|_{\mathrm{L}^{2+p}}^{2+p}ds\nonumber\\&\leq\left(\|u_0\|_{\mathrm{L}^p}^p+\left(\frac{p-1}{p}\right)^{p-1}\left(\beta(1+\gamma)\mu_{n,\infty}^2+\beta(1+\gamma)\mu_{n,\infty}\right)+\left(\frac{p-2}{p}\right)^{\frac{p-2}{p}}\frac{\alpha^2(p-1)}{2\nu}\mu_{n,\infty}^4\right)\nonumber\\&\quad\times\exp\bigg\{pt\bigg(\frac{\alpha^2(p-1)}{\nu}\mu_{n,\infty}^2+2\beta(1+\gamma)\mu_{n,\infty}+\beta(1+\gamma)^2+12\beta\mu_{n,\infty}^2\nonumber\\&\qquad+\left(\beta(1+\gamma)\mu_{n,\infty}^2+\beta(1+\gamma)\mu_{n,\infty}\right)+\frac{\alpha^2(p-1)}{4\nu}\mu_{n,\infty}^4\bigg)\bigg\}, \end{align} for all $t\in[0,T]$, where \begin{align}\label{355}\mu_{n,\infty}:=\sup_{t\in[0,T]}\|W_A^n(t)\|_{\mathrm{L}^{\infty}}\leq\sup_{t\in[0,T]}\|W_A(t)\|_{\mathrm{L}^{\infty}}=:\mu_{\infty}.\end{align} Thus, we have $\sup\limits_n\sup\limits_{t\in[0,T]}\|v^n(t)\|_{\mathrm{L}^p}\leq C(\|u_0\|_{\mathrm{L}^p},\mu_{\infty},T,p,\alpha,\beta,\gamma,\nu).$ Hence, the inequality \eqref{3.32} follows. \end{proof} The following theorem can be immediately deduced from Theorem \ref{thm31} and Lemma \ref{lem32}. \begin{theorem} Let the $\mathscr{F}_0$-measurable initial data $u_0$ be given and $u_0\in\mathrm{L}^3(\mathcal{O})$, $\mathbb{P}$-a.s. Then there exists a unique mild solution of the equation \eqref{2.13} (with $\delta=1$), which belongs to $\mathrm{C}([0,T];\mathrm{L}^{3}(\mathcal{O}))$, $\mathbb{P}$-a.s. 
\end{theorem} \begin{remark} For the general exponent $\delta\geq1$, a similar argument yields \begin{align}\label{3.55} &\|v(t)\|_{\mathrm{L}^{2\delta+1}}^{2\delta+1}+\nu\delta(2\delta+1)\int_0^t\||v(s)|^{\frac{2\delta-1}{2}}\partial_xv(s)\|_{\mathrm{L}^2}^2ds+(2\delta+1)\beta\gamma\int_0^t\|v(s)\|_{\mathrm{L}^{2\delta+1}}^{2\delta+1}ds\nonumber\\&\qquad+\frac{\beta}{4}\int_0^t\|v(s)\|_{\mathrm{L}^{4\delta+1}}^{4\delta+1}ds\nonumber\\&\leq\Bigg\{\|u_0\|_{\mathrm{L}^{2\delta+1}}^{2\delta+1}+t\left[\frac{\delta^2\alpha^22^{\delta+1}}{\nu}\left(\frac{2\delta-1}{2\delta+1}\right)^{\frac{2}{2\delta-1}}+\beta(1+\gamma)2^{\delta}\left(\frac{2\delta}{2\delta+1}\right)^{2\delta}\right]\mu_{\infty}^{(2\delta+1)(\delta+1)}\nonumber\\&\qquad+t\beta\gamma\left(\frac{2\delta}{2\delta+1}\right)^{2\delta}\mu_{\infty}^{2\delta+1}+t\beta(2\delta+1)2^{2\delta-1}\left(\frac{2\delta}{2\delta+1}\right)^{2\delta}\mu_{\infty}^{(2\delta+1)^2}\Bigg\}\nonumber\\&\quad\times\exp\Bigg\{\left[\frac{\delta^2\alpha^22^{\delta}}{\nu}+\beta(1+\gamma)^22^{2\delta}+\beta(1+\gamma)2^{\delta} +\beta\gamma+\beta(2\delta+1)2^{2\delta-1}\right]t\nonumber\\&\qquad+\beta(2\delta+1)^{2\delta}2^{2\delta(2\delta-1)}\mu_{\infty}^{2\delta}+\left(\frac{\delta^2\alpha^22^{\delta+1}}{2\nu}\right)^{\delta}\frac{1}{\delta}\left(\frac{4(\delta-1)}{\beta\delta}\right)^{\delta-1}\mu_{\infty}^{2\delta}\Bigg\}, \end{align} for all $t\in[0,T]$. Indeed, consider the generalized Burgers-Huxley equation \begin{align}\label{3.48} \frac{\partial v^n}{\partial t}=\nu\frac{\partial^2v^n}{\partial x^2}-\alpha (v^n+W_A^n)^{\delta}\frac{\partial }{\partial x}(v^n+W_A^n)+\beta (v^n+W_A^n)(1-(v^n+W_A^n)^{\delta})((v^n+W_A^n)^{\delta}-\gamma). \end{align} Multiplying \eqref{3.48} by $|v^n|^{p-2}v^n$, integrating over $(0,1)$ and then using Taylor's formula, we find \begin{align}\label{335} &\frac{1}{p}\frac{d}{d t}\|v^n(t)\|_{\mathrm{L}^{p}}^p+\nu(p-1)\||v^n(t)|^{\frac{p-2}{2}}\partial_xv^n(t)\|_{\mathrm{L}^2}^2\nonumber\\&=-\alpha((v^n(t)+W_A^n(t))\partial_x(v^n(t)+W_A^n(t))^{\delta},|v^n(t)|^{p-2}v^n(t))\nonumber\\&\quad +\beta(1+\gamma)((v^n(t)+W_A^n(t))^{\delta+1},|v^n(t)|^{p-2}v^n(t))\nonumber\\&\quad-\beta\gamma(v^n(t)+W_A^n(t),|v^n(t)|^{p-2}v^n(t))-\beta((v^n(t)+W_A^n(t))^{2\delta+1},|v^n(t)|^{p-2}v^n(t))\nonumber\\&=-\frac{\alpha}{\delta+1}(\partial_x(v^n(t)+W_A^n(t))^{\delta+1},|v^n(t)|^{p-2}v^n(t))\nonumber\\&\quad +\beta(1+\gamma)((v^n(t)+W_A^n(t))^{\delta+1},|v^n(t)|^{p-2}v^n(t))\nonumber\\&\quad-\beta\gamma\|v^n(t)\|_{\mathrm{L}^p}^p-\beta\gamma(W_A^n(t),|v^n(t)|^{p-2}v^n(t))-\beta\|v^n(t)\|_{\mathrm{L}^{2\delta+p}}^{2\delta+p}\nonumber\\&\quad-\beta(2\delta+1)(W_A^n(t)(\theta_2 v^n(t)+(1-\theta_2)W_A^n(t))^{2\delta},|v^n(t)|^{p-2}v^n(t)), \end{align} for $0<\theta_2<1$. It can be easily deduced from \eqref{335} that \begin{align}\label{3.51} &\frac{1}{p}\frac{d}{d t}\|v^n(t)\|_{\mathrm{L}^{p}}^p+\nu(p-1)\||v^n(t)|^{\frac{p-2}{2}}\partial_xv^n(t)\|_{\mathrm{L}^2}^2+\beta\gamma\|v^n(t)\|_{\mathrm{L}^p}^p+\beta\|v^n(t)\|_{\mathrm{L}^{2\delta+p}}^{2\delta+p}\nonumber\\&=-\frac{\alpha}{\delta+1}(\partial_x(v^n(t)+W_A^n(t))^{\delta+1},|v^n(t)|^{p-2}v^n(t))\nonumber\\&\quad +\beta(1+\gamma)((v^n(t)+W_A^n(t))^{\delta+1},|v^n(t)|^{p-2}v^n(t))-\beta\gamma(W_A^n(t),|v^n(t)|^{p-2}v^n(t))\nonumber\\&\quad-\beta(2\delta+1)(W_A^n(t)(\theta_2 v^n(t)+(1-\theta_2)W_A^n(t))^{2\delta},|v^n(t)|^{p-2}v^n(t))=:\sum_{j=1}^4J_j, \end{align} where $J_j$, $j=1,\ldots,4$, represent the terms appearing in the right hand side of the equality \eqref{3.51}.
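For clarity, the Taylor formula invoked here is the mean value form: for $f(x)=x^{m}$ with $m\geq1$,
\begin{align*}
(a+b)^{m}=a^{m}+m\,b\,\xi^{m-1}, \quad a,b\in\mathbb{R},
\end{align*}
where $\xi$ is a point on the segment joining $a$ and $a+b$; it is applied above with $a=v^n$, $b=W_A^n$ and $m=\delta+1$ or $m=2\delta+1$, producing the intermediate values $\theta_1,\theta_2\in(0,1)$.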
An integration by parts and Taylor's formula yields \begin{align}\label{3.52} &\frac{\alpha}{\delta+1}(\partial_x(v^n+W_A^n)^{\delta+1},|v^n|^{p-2}v^n)\nonumber\\&=-\frac{\alpha}{\delta+1}(p-1)((v^n+W_A^n)^{\delta+1},|v^n|^{p-2}\partial_xv^n)\nonumber\\&=-\frac{\alpha}{\delta+1}(p-1)((v^n)^{\delta+1},|v^n|^{p-2}\partial_xv^n)\nonumber\\&\quad -\frac{\alpha}{\delta+1}(p-1)(\delta+1)(W_A^n(\theta_1 v^n+(1-\theta_1)W_A^n)^{\delta},|v^n|^{p-2}\partial_xv^n)\nonumber\\&= -\alpha(p-1)(W_A^n(\theta_1 v^n+(1-\theta_1)W_A^n)^{\delta},|v^n|^{p-2}\partial_xv^n), \end{align} for $0<\theta_1<1$, where we used \eqref{7a}. The term on the right hand side of the equality can be estimated as \begin{align}\label{3.53} & \alpha(p-1)|(W_A^n(\theta_1 v^n+(1-\theta_1)W_A^n)^{\delta},|v^n|^{p-2}\partial_xv^n)|\nonumber\\&\leq\alpha(p-1)\||v^n|^{\frac{p-2}{2}}W_A^n(\theta_1 v^n+(1-\theta_1)W_A^n)^{\delta}\|_{\mathrm{L}^2}\||v^n|^{\frac{p-2}{2}}\partial_xv^n\|_{\mathrm{L}^2}\nonumber\\&\leq\frac{\nu}{2}\||v^n|^{\frac{p-2}{2}}\partial_xv^n\|_{\mathrm{L}^2}^2+\frac{(p-1)^2\alpha^2}{2\nu}\||v^n|^{\frac{p-2}{2}}W_A^n(\theta_1 v^n+(1-\theta_1)W_A^n)^{\delta}\|_{\mathrm{L}^2}^2\nonumber\\&\leq\frac{\nu}{2}\||v^n|^{\frac{p-2}{2}}\partial_xv^n\|_{\mathrm{L}^2}^2+\frac{(p-1)^2\alpha^22^{\delta-1}}{2\nu}\left(\|W_A^n\|_{\mathrm{L}^{\infty}}^2\|v^n\|_{\mathrm{L}^{p+2\delta-2}}^{p+2\delta-2}+\|W_A^n\|_{\mathrm{L}^{\infty}}^{2\delta+2}\|v^n\|_{\mathrm{L}^{p-2}}^{{p-2}}\right)\nonumber\\&\leq\frac{\nu}{2}\||v^n|^{\frac{p-2}{2}}\partial_xv^n\|_{\mathrm{L}^2}^2+\frac{(p-1)^2\alpha^22^{\delta-1}}{2\nu}\|W_A^n\|_{\mathrm{L}^{\infty}}^2\|v^n\|_{\mathrm{L}^{p+2\delta}}^{\frac{(\delta-1)(p+2\delta)}{\delta}}\|v^n\|_{\mathrm{L}^p}^{\frac{p}{\delta}}\nonumber\\&\quad+\frac{(p-1)^2\alpha^22^{\delta-1}}{2\nu}\|W_A^n\|_{\mathrm{L}^{\infty}}^{2\delta+2}\|v^n\|_{\mathrm{L}^p}^{p-2}\nonumber\\&\leq\frac{\nu}{2}\||v^n|^{\frac{p-2}{2}}\partial_xv^n\|_{\mathrm{L}^2}^2+\frac{\beta}{4}\|v^n\|_{\mathrm{L}^{p+2\delta}}^{p+2\delta}+\left(\frac{(p-1)^2\alpha^22^{(\delta-1)}}{2\nu}\right)^{\delta}\frac{1}{\delta}\left(\frac{4(\delta-1)}{\beta\delta}\right)^{\delta-1}\|W_A^n\|_{\mathrm{L}^{\infty}}^{2\delta}\|v^n\|_{\mathrm{L}^p}^p\nonumber\\&\quad+\frac{(p-1)^2\alpha^22^{\delta-2}}{\nu}\|v^n\|_{\mathrm{L}^p}^{p}+\frac{(p-1)^2\alpha^22^{\delta-1}}{p\nu}\left(\frac{p-2}{p}\right)^{\frac{2}{p-2}}\|W_A^n\|_{\mathrm{L}^{\infty}}^{p(\delta+1)}, \end{align} where we used H\"older's, interpolation and Young's inequalities. We estimate $J_2$ using H\"older's, interpolation and Young's inequalities as \begin{align}\label{3.60} |J_2|&\leq\beta(1+\gamma)2^{\delta}(|v^n|^{\delta+1}+|W_A^n|^{\delta+1},|v^n|^{p-1})\nonumber\\&\leq \beta(1+\gamma)2^{\delta}\|v^n\|_{\mathrm{L}^{\delta+p}}^{\delta+p}+\beta(1+\gamma)2^{\delta}\|W_A^n\|_{\mathrm{L}^{\infty}}^{\delta+1}\|v^n\|_{\mathrm{L}^{p-1}}^{p-1}\nonumber\\&\leq\beta(1+\gamma)2^{\delta}\|v^n\|_{\mathrm{L}^{2\delta+p}}^{\frac{2\delta+p}{2}}\|v^n\|_{\mathrm{L}^p}^{\frac{p}{2}}+\beta(1+\gamma)2^{\delta}\|W_A^n\|_{\mathrm{L}^{\infty}}^{\delta+1}\|v^n\|_{\mathrm{L}^{p}}^{p-1}\nonumber\\&\leq\frac{\beta}{4}\|v^n\|_{\mathrm{L}^{2\delta+p}}^{{2\delta+p}}+\beta(1+\gamma)^22^{2\delta}\|v^n\|_{\mathrm{L}^p}^p+\beta(1+\gamma)2^{\delta}\|v^n\|_{\mathrm{L}^{p}}^{p}\nonumber\\&\quad+\frac{\beta(1+\gamma)2^{\delta}}{p}\left(\frac{p-1}{p}\right)^{p-1}\|W_A^n\|_{\mathrm{L}^{\infty}}^{p(\delta+1)}. 
\end{align} Similarly, we estimate $J_3$ and $J_4$ as \begin{align}\label{3.61} |J_3|&\leq\beta\gamma\|W_A^n\|_{\mathrm{L}^{\infty}}\|v^n\|_{\mathrm{L}^{p-1}}^{p-1}\leq\beta\gamma\|W_A^n\|_{\mathrm{L}^{\infty}}\|v^n\|_{\mathrm{L}^{p}}^{p-1}\nonumber\\&\leq\beta\gamma\|v^n\|_{\mathrm{L}^p}^p+\frac{\beta\gamma}{p}\left(\frac{p-1}{p}\right)^{p-1}\|W_A^n\|_{\mathrm{L}^{\infty}}^p,\\ |J_4|&\leq\beta(2\delta+1)2^{2\delta-1}(|W_A^n||v^n|^{2\delta}+|W_A^n|^{2\delta+1},|v^n|^{p-1})\nonumber\\&\leq\beta(2\delta+1)2^{2\delta-1}\|W_A^n\|_{\mathrm{L}^{\infty}}\|v^n\|_{\mathrm{L}^{2\delta+p-1}}^{2\delta+p-1}+\beta(2\delta+1)2^{2\delta-1}\|W_A^n\|_{\mathrm{L}^{\infty}}^{2\delta+1}\|v^n\|_{\mathrm{L}^{p-1}}^{p-1}\nonumber\\&\leq\beta(2\delta+1)2^{2\delta-1}\|W_A^n\|_{\mathrm{L}^{\infty}}\|v^n\|_{\mathrm{L}^{2\delta+p}}^{\frac{(2\delta-1)(2\delta+p)}{2\delta}}\|v^n\|_{\mathrm{L}^p}^{\frac{p}{2\delta}}+\beta(2\delta+1)2^{2\delta-1}\|W_A^n\|_{\mathrm{L}^{\infty}}^{2\delta+1}\|v^n\|_{\mathrm{L}^{p}}^{p-1}\nonumber\\&\leq\frac{\beta}{4}\|v^n\|_{\mathrm{L}^{2\delta+p}}^{2\delta+p}+\beta(2\delta+1)^{2\delta}2^{2\delta(2\delta-1)}\|W_A^n\|_{\mathrm{L}^{\infty}}^{2\delta}\|v^n\|_{\mathrm{L}^p}^{p}+\beta(2\delta+1)2^{2\delta-1}\|v^n\|_{\mathrm{L}^{p}}^{p}\nonumber\\&\quad+\frac{\beta(2\delta+1)2^{2\delta-1}}{p}\left(\frac{p-1}{p}\right)^{p-1}\|W_A^n\|_{\mathrm{L}^{\infty}}^{p(2\delta+1)}.\label{3.62} \end{align} Combining \eqref{3.53}-\eqref{3.62} and substituting them in \eqref{3.51} yields \begin{align}\label{362} &\frac{1}{p}\frac{d}{d t}\|v^n(t)\|_{\mathrm{L}^{p}}^p+\frac{\nu(p-1)}{2}\||v^n(t)|^{\frac{p-2}{2}}\partial_xv^n(t)\|_{\mathrm{L}^2}^2+\beta\gamma\|v^n(t)\|_{\mathrm{L}^p}^p+\frac{\beta}{4}\|v^n(t)\|_{\mathrm{L}^{2\delta+p}}^{2\delta+p}\nonumber\\&\leq \bigg\{\frac{(p-1)^2\alpha^22^{\delta-2}}{\nu}+\beta(1+\gamma)^22^{2\delta}+\beta(1+\gamma)2^{\delta} +\beta\gamma+\beta(2\delta+1)^{2\delta}2^{2\delta(2\delta-1)}\|W_A^n(t)\|_{\mathrm{L}^{\infty}}^{2\delta}\nonumber\\&\qquad+\beta(2\delta+1)2^{2\delta-1}+\left(\frac{(p-1)^2\alpha^22^{(\delta-1)}}{2\nu}\right)^{\delta}\frac{1}{\delta}\left(\frac{4(\delta-1)}{\beta\delta}\right)^{\delta-1}\|W_A^n(t)\|_{\mathrm{L}^{\infty}}^{2\delta}\bigg\}\|v^n(t)\|_{\mathrm{L}^p}^{p}\nonumber\\&\quad+\left\{\frac{(p-1)^2\alpha^22^{\delta-1}}{p\nu}\left(\frac{p-2}{p}\right)^{\frac{2}{p-2}}+\frac{\beta(1+\gamma)2^{\delta}}{p}\left(\frac{p-1}{p}\right)^{p-1}\right\}\|W_A^n(t)\|_{\mathrm{L}^{\infty}}^{p(\delta+1)}\nonumber\\&\quad+\frac{\beta\gamma}{p}\left(\frac{p-1}{p}\right)^{p-1}\|W_A^n(t)\|_{\mathrm{L}^{\infty}}^{p}+\frac{\beta(2\delta+1)2^{2\delta-1}}{p}\left(\frac{p-1}{p}\right)^{p-1}\|W_A^n(t)\|_{\mathrm{L}^{\infty}}^{p(2\delta+1)}.
\end{align} Integrating the above inequality from $0$ to $t$, we find \begin{align}\label{3.64} &\|v^n(t)\|_{\mathrm{L}^{p}}^p+\frac{\nu p(p-1)}{2}\int_0^t\||v^n(s)|^{\frac{p-2}{2}}\partial_xv^n(s)\|_{\mathrm{L}^2}^2ds+p\beta\gamma\int_0^t\|v^n(s)\|_{\mathrm{L}^p}^pds\nonumber\\&\qquad+\frac{\beta}{4}\int_0^t\|v^n(s)\|_{\mathrm{L}^{2\delta+p}}^{2\delta+p}ds\nonumber\\&\leq\|u_0\|_{\mathrm{L}^p}^p+p\int_0^t\bigg\{\frac{(p-1)^2\alpha^22^{\delta-2}}{\nu}+\beta(1+\gamma)^22^{2\delta}+\beta(1+\gamma)2^{\delta} +\beta\gamma+\beta(2\delta+1)2^{2\delta-1}\nonumber\\&\qquad+\beta(2\delta+1)^{2\delta}2^{2\delta(2\delta-1)}\|W_A^n(s)\|_{\mathrm{L}^{\infty}}^{2\delta}\nonumber\\&\qquad+\left(\frac{(p-1)^2\alpha^22^{(\delta-1)}}{2\nu}\right)^{\delta}\frac{1}{\delta}\left(\frac{4(\delta-1)}{\beta\delta}\right)^{\delta-1}\|W_A^n(s)\|_{\mathrm{L}^{\infty}}^{2\delta}\bigg\}\|v^n(s)\|_{\mathrm{L}^p}^{p}ds\nonumber\\&\quad+\left\{\frac{(p-1)^2\alpha^22^{\delta-1}}{\nu}\left(\frac{p-2}{p}\right)^{\frac{2}{p-2}}+\beta(1+\gamma)2^{\delta}\left(\frac{p-1}{p}\right)^{p-1}\right\}\int_0^t\|W_A^n(s)\|_{\mathrm{L}^{\infty}}^{p(\delta+1)}ds\nonumber\\&\quad+\beta\gamma\left(\frac{p-1}{p}\right)^{p-1}\int_0^t\|W_A^n(s)\|_{\mathrm{L}^{\infty}}^{p}ds+\beta(2\delta+1)2^{2\delta-1}\left(\frac{p-1}{p}\right)^{p-1}\int_0^t\|W_A^n(s)\|_{\mathrm{L}^{\infty}}^{p(2\delta+1)}ds. \end{align} An application of Gronwall's inequality in \eqref{3.64} gives \begin{align}\label{366} &\|v^n(t)\|_{\mathrm{L}^{p}}^p+\frac{\nu p(p-1)}{2}\int_0^t\||v^n(s)|^{\frac{p-2}{2}}\partial_xv^n(s)\|_{\mathrm{L}^2}^2ds+p\beta\gamma\int_0^t\|v^n(s)\|_{\mathrm{L}^p}^pds\nonumber\\&\qquad+\frac{\beta}{4}\int_0^t\|v^n(s)\|_{\mathrm{L}^{2\delta+p}}^{2\delta+p}ds\nonumber\\&\leq\Bigg\{\|u_0\|_{\mathrm{L}^p}^p+t\left[\frac{(p-1)^2\alpha^22^{\delta-1}}{\nu}\left(\frac{p-2}{p}\right)^{\frac{2}{p-2}}+\beta(1+\gamma)2^{\delta}\left(\frac{p-1}{p}\right)^{p-1}\right]\sup_{s\in[0,t]}\|W_A^n(s)\|_{\mathrm{L}^{\infty}}^{p(\delta+1)}\nonumber\\&\qquad+t\beta\gamma\left(\frac{p-1}{p}\right)^{p-1}\sup_{s\in[0,t]}\|W_A^n(s)\|_{\mathrm{L}^{\infty}}^{p}\nonumber\\&\qquad+t\beta(2\delta+1)2^{2\delta-1}\left(\frac{p-1}{p}\right)^{p-1}\sup_{s\in[0,t]}\|W_A^n(s)\|_{\mathrm{L}^{\infty}}^{p(2\delta+1)}\Bigg\}\nonumber\\&\quad\times\exp\Bigg\{\left[\frac{(p-1)^2\alpha^22^{\delta-2}}{\nu}+\beta(1+\gamma)^22^{2\delta}+\beta(1+\gamma)2^{\delta} +\beta\gamma+\beta(2\delta+1)2^{2\delta-1}\right]t\nonumber\\&\qquad+\beta(2\delta+1)^{2\delta}2^{2\delta(2\delta-1)}\sup_{s\in[0,t]}\|W_A^n(s)\|_{\mathrm{L}^{\infty}}^{2\delta}\nonumber\\&\qquad+\left(\frac{(p-1)^2\alpha^22^{(\delta-1)}}{2\nu}\right)^{\delta}\frac{1}{\delta}\left(\frac{4(\delta-1)}{\beta\delta}\right)^{\delta-1}\sup_{s\in[0,t]}\|W_A^n(s)\|_{\mathrm{L}^{\infty}}^{2\delta}\Bigg\}, \end{align} for all $t\in[0,T]$. Using \eqref{355} in \eqref{366} and taking $p=2\delta+1$, we finally obtain \eqref{3.55}. \end{remark} The following theorem is an immediate application of Theorem \ref{thm31} and Lemma \ref{lem32}. \begin{theorem} Let the $\mathscr{F}_0$-measurable initial data $u_0$ be given and $u_0\in\mathrm{L}^{2\delta+1}(\mathcal{O})$, $\mathbb{P}$-a.s. Then there exists a unique mild solution of the equation \eqref{2.13}, which belongs to $\mathrm{C}([0,T];\mathrm{L}^{2\delta+1}(\mathcal{O}))$, $\mathbb{P}$-a.s.
\end{theorem} \fi \section{Strong solution}\label{sec3}\setcounter{equation}{0} In this section, we prove the existence and uniqueness of strong solution to the system \eqref{abstract} by making use of local monotonicity results obtained in Theorem \ref{monotone} and a stochastic generalization of the localized version of the Minty-Browder technique. Let us first give the definition of a unique global strong solution to the system (\ref{abstract}). \begin{definition}[Global strong solution]\label{def3.1} Let $u_0\in\mathrm{L}^{2p}(\Omega;\mathrm{L}^2(\mathcal{O}))$, $p>2$, be given. An $\mathrm{H}_0^1(\mathcal{O})$-valued $(\mathscr{F}_t)_{t\geq 0}$-adapted progressively measurable stochastic process $u(\cdot)$ is called a \emph{strong solution} to (\ref{abstract}), if the following conditions are satisfied: \begin{enumerate} \item [(i)] the process $$u\in\mathrm{L}^{2p}(\Omega;\mathrm{L}^{\infty}(0,T;\mathrm{L}^2(\mathcal{O})))\cap\mathrm{L}^2(\Omega;\mathrm{L}^2(0,T;\mathrm{H}_0^1(\mathcal{O})))\cap\mathrm{L}^4(\Omega;\mathrm{L}^4(0,T;\mathrm{L}^4(\mathcal{O})))$$ has a continuous modification (still denoted by $u$) with $u\in\mathrm{C}([0,T];\mathrm{L}^2(\mathcal{O}))\cap\mathrm{L}^2(0,T;\mathrm{H}_0^1(\mathcal{O}))$, $\mathbb{P}$-a.s., \item [(ii)] the following equality holds for every $t\in [0, T ]$, as an element of $\mathrm{H}^{-1}(\mathcal{O}),$ $\mathbb{P}$-a.s. \begin{align}\label{4.4} u(t)&=u_0+\int_0^t[- \nu Au(s)-\alpha B(u(s))+\beta c(u(s))]ds+\int_0^t\sigma(s,u(s))dW(s). \end{align} \end{enumerate} \end{definition} An alternative version of condition (\ref{4.4}) is to require that for any $v\in\mathrm{H}_0^1(\mathcal{O})$: \begin{align}\label{4.5} ( u(t),v)&=(u_0,v)+\int_0^t\langle- \nu Au(s)-\alpha B(u(s))+\beta c(u(s)),v\rangle ds+\int_0^t(\sigma(s,u(s))dW(s),v). \end{align} \begin{definition} A strong solution $u(\cdot)$ to (\ref{abstract}) is called a \emph{pathwise unique strong solution} if, whenever $\widetilde{u}(\cdot)$ is another strong solution, $$\mathbb{P}\Big\{\omega\in\Omega:u(t)=\widetilde{u}(t),\text{ for all } t\in[0,T]\Big\}=1.$$ \end{definition} \subsection{Energy estimates} Let us first show the energy estimates satisfied by the system \eqref{abstract}. Let the functions $w_k=w_k(x),$ $k=1,2,\ldots,$ be smooth, and let the set $\{w_k(x)\}_{k=1}^{\infty}$ be an orthogonal basis of $\mathrm{H}_0^1(\mathcal{O})$ and an orthonormal basis of $\mathrm{H}=\mathrm{L}^2(\mathcal{O})$. One can take $\{w_k(x)\}_{k=1}^{\infty}$ as the complete set of normalized eigenfunctions of the operator $-\partial_{xx}$ in $\mathrm{H}_0^1(\mathcal{O})$. Let $\mathrm{H}_n=\mathrm{span}\{w_1,\ldots,w_n\}$ be the $n$-dimensional subspace of $\mathrm{H}$. Let $P_n$ denote the orthogonal projection of $\mathrm{H}^{-1}(\mathcal{O})$ onto $\mathrm{H}_n$, that is, $P_nx=\sum\limits_{k=1}^n\langle x,w_k\rangle w_k$. Since every element $x\in\mathrm{H}$ induces a functional $x^*\in\mathrm{H}^{-1}(\mathcal{O})$ by the formula $\langle x^*,y\rangle=(x,y)$, $y\in\mathrm{H}_0^1(\mathcal{O})$, then $P_n\big|_{\mathrm{H}}$, the orthogonal projection of $\mathrm{H}$ onto $\mathrm{H}_n$, is given by $P_nx=\sum\limits_{k=1}^n(x,w_k)w_k$. Hence in particular, $P_n$ is the orthogonal projection from $\mathrm{H}$ onto $\text{span}\{w_1,\ldots,w_n\}$. We define $B_n(u_n)=P_nB(u_n)$, $c_n(u_n)=P_nc(u_n)$, $W_n(\cdot)=P_nW(\cdot)$, and $\sigma_n(\cdot,u_n)=P_n\sigma(\cdot,u_n)$.
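For instance, when $\mathcal{O}=(0,1)$, a canonical choice of the basis $\{w_k\}_{k=1}^{\infty}$ is given by the Dirichlet eigenfunctions of $-\partial_{xx}$:
\begin{align*}
w_k(x)=\sqrt{2}\sin(k\pi x), \quad -\partial_{xx}w_k=k^2\pi^2w_k, \quad k=1,2,\ldots,
\end{align*}
which form an orthonormal basis of $\mathrm{L}^2(0,1)$ and an orthogonal basis of $\mathrm{H}_0^1(0,1)$.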
Let us now consider the following system of ODEs: \begin{equation}\label{4.7} \left\{ \begin{aligned} d(u_n(t),v)&=(-\nu Au_n(t)-\alpha B_n(u_n(t))+\beta c_n(u_n(t)),v) d t+(\sigma_n(t,u_n(t))d W_n(t),v),\\ u_n(0)&=u_0^n, \end{aligned} \right. \end{equation} with $u_0^n=P_nu_0,$ for all $v\in\mathrm{H}_n$. Since $B_n(\cdot)$ and $c_n(\cdot)$ are locally Lipschitz (see \eqref{2.1} and \eqref{2p7}), and $\sigma_n(\cdot,\cdot)$ is globally Lipschitz, the system (\ref{4.7}) has a unique $\mathrm{H}_n$-valued local solution $u_n(\cdot)$ and $u_n\in\mathrm{L}^2(\Omega;\mathrm{L}^{\infty}(0,T^*;\mathrm{H}_n))$ with continuous sample paths. Let us now derive a-priori energy estimates satisfied by the system \eqref{4.7}. \begin{proposition}[Energy estimate]\label{prop1} Let $u_n(\cdot)$ be the unique solution of the system of stochastic ODEs (\ref{4.7}) with $u_0\in\mathrm{L}^{2p}(\Omega;\mathrm{L}^2(\mathcal{O}))$, $p>2$. Then, we have \begin{align}\label{energy1} &\mathbb{E}\left[\sup_{t\in[0,T]}\|u_n(t)\|_{\mathrm{L}^2}^2+4\nu\int_0^{T}\|u_n(t)\|_{\mathrm{H}_0^1}^2d t+2\beta\int_0^{T}\|u_n(t)\|_{\mathrm{L}^4}^4dt\right]\nonumber\\&\quad \leq (2\|u_0\|_{\mathrm{L}^2}^2+14KT)e^{4(\beta(1+\gamma^2)+7K)T}.\\ &\mathbb{E}\bigg[\sup_{t\in[0,T]}\|u_n(t)\|_{\mathrm{L}^2}^{2p}+4p\nu\int_0^{T}\|u_n(t)\|_{\mathrm{L}^2}^{2(p-1)}\|u_n(t)\|_{\mathrm{H}_0^1}^2d t+2p\beta\int_0^{T}\|u_n(t)\|_{\mathrm{L}^2}^{2(p-1)}\|u_n(t)\|_{\mathrm{L}^4}^4dt\bigg]\nonumber\\&\quad \leq \left[2\|u_0\|_{\mathrm{L}^2}^{2p} +C(p,K,T)2^{p}T\right]e^{[2p\beta(1+\gamma)^2+C(p,K,T)2^{p}]T},\label{energy2} \end{align} where $C(p,K,T)=(4(p-1))^{p-1}(14p-1)^pK^pT^{p-1}$. \end{proposition} \begin{proof} Let us define a sequence of stopping times $\tau_N$ by \begin{align}\label{stopm} \tau_N^n:=\inf_{t\geq 0}\left\{t:\|u_n(t)\|_{\mathrm{L}^2}^2+\int_0^t\|u_n(s)\|_{\mathrm{H}_0^1}^2d s+\int_0^t\|u_n(s)\|_{\mathrm{L}^4}^4ds\geq N\right\}, \end{align} for $N\in\mathbb{N}$. Applying the finite dimensional It\^{o} formula to the process $\|u_n(\cdot)\|_{\mathrm{L}^2}^2$, we obtain (see Theorem 32, \cite{PEP}) \begin{align}\label{3.6} \|u_n(t\wedge\tau_N^n)\|_{\mathrm{L}^2}^2&= \|u_n(0)\|_{\mathrm{L}^2}^2+2\int_0^{t\wedge\tau_N^n}\langle-\nu Au_n(s)-\alpha B_n(u_n(s))+\beta c_n(u_n(s)),u_n(s)\rangle d s \nonumber\\&\quad+\int_0^{t\wedge\tau_N^n}\|\sigma_n(s,u_n(s))\|^2_{\mathcal{L}_{Q}}d s +2\int_0^{t\wedge\tau_N^n}\left(\sigma_n(s,u_n(s))dW_n(s),u_n(s)\right). \end{align} Note that $\langle B_n(u_n),u_n\rangle=\langle B(u_n),u_n\rangle=0$, using \eqref{6}. Using \eqref{7}, we estimate $(c_n(u_n),u_n)$ as \begin{align}\label{3p7} (c(u_n),u_n)&\leq\frac{(1+\gamma^2)}{2}\|u_n\|_{\mathrm{L}^2}^2-\frac{1}{2}\|u_n\|_{\mathrm{L}^4}^4. \end{align} Let us use \eqref{3p7} in \eqref{3.6} and then take expectation to get \begin{align}\label{3p8} &\mathbb{E}\left[\|u_n(t\wedge\tau_N^n)\|_{\mathrm{L}^2}^2+2\nu\int_0^{t\wedge\tau_N^n}\|u_n(s)\|_{\mathrm{H}_0^1}^2d s+\beta\int_0^{t\wedge\tau_N^n}\|u_n(s)\|_{\mathrm{L}^4}^4ds\right]\nonumber\\&\leq \|u_0\|_{\mathrm{L}^2}^2+\beta(1+\gamma^2)\mathbb{E}\left[\int_0^{t\wedge\tau_N^n}\|u_n(s)\|_{\mathrm{L}^2}^2ds\right] +\mathbb{E}\left[\int_0^{t\wedge\tau_N^n}\|\sigma_n(s,u_n(s))\|^2_{\mathcal{L}_{Q}}d s \right], \end{align} where we used $\|u_n(0)\|_{\mathrm{L}^2}\leq \|u_0\|_{\mathrm{L}^2}$ and the fact that the final term in the right hand side of (\ref{3.6}) is a local martingale.
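Let us justify that the stopped stochastic integral has zero expectation: by Hypothesis \ref{hyp} (H.2) and the definition \eqref{stopm} of $\tau_N^n$,
\begin{align*}
\mathbb{E}\left[\int_0^{t\wedge\tau_N^n}\|\sigma_n(s,u_n(s))\|_{\mathcal{L}_{Q}}^2\|u_n(s)\|_{\mathrm{L}^2}^2ds\right]\leq K(1+N)Nt<+\infty,
\end{align*}
so that the stopped integral is a true martingale starting from $0$.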
Using the Hypothesis \ref{hyp} (H.2) in \eqref{3p8}, we obtain \begin{align}\label{3p9} &\mathbb{E}\left[\|u_n(t\wedge\tau_N^n)\|_{\mathrm{L}^2}^2+2\nu\int_0^{t\wedge\tau_N^n}\|u_n(s)\|_{\mathrm{H}_0^1}^2d s+\beta\int_0^{t\wedge\tau_N^n}\|u_n(s)\|_{\mathrm{L}^4}^4ds\right]\nonumber\\&\leq \|u_0\|_{\mathrm{L}^2}^2+KT+\left[\beta(1+\gamma^2)+K\right]\mathbb{E}\left[\int_0^{t}\chi_{[0,t\wedge\tau_N^n)}(s)\|u_n(s)\|_{\mathrm{L}^2}^2ds\right]. \end{align} An application of Gronwall's inequality in (\ref{3p9}) yields \begin{align}\label{3p10} &\mathbb{E}\left[\|u_n(t\wedge\tau_N^n)\|_{\mathrm{L}^2}^2\right]\leq \left(\|u_0\|_{\mathrm{L}^2}^2+KT\right)e^{[\beta(1+\gamma^2)+K]T}, \end{align} for all $t\in[0,T]$. It can be shown that \begin{align}\label{3p11} \lim_{N\to\infty}\mathbb{P}\Big\{\omega\in\Omega:\tau_N^n(\omega)<t\Big\}=0, \ \textrm{ for all }\ t\in [0,T], \end{align} and $t\wedge\tau_N^n\to t$ as $N\to\infty$. On taking limit $N\to\infty$ in (\ref{3p10}) and using the \emph{monotone convergence theorem}, we get \begin{align}\label{3p12} \sup_{t\in[0,T]} \mathbb{E}\left[\|u_n(t)\|_{\mathrm{L}^2}^2\right]\leq \left(\|u_0\|_{\mathrm{L}^2}^2+KT\right)e^{[\beta(1+\gamma^2)+K]T}. \end{align} Substituting (\ref{3p12}) in (\ref{3p9}), we finally arrive at \begin{align}\label{4.16a} &\mathbb{E}\left[\|u_n(t)\|_{\mathrm{L}^2}^2+2\nu\int_0^{t}\|u_n(s)\|_{\mathrm{H}_0^1}^2d s+\beta\int_0^{t}\|u_n(s)\|_{\mathrm{L}^4}^4ds\right]\leq \left(\|u_0\|_{\mathrm{L}^2}^2+KT\right)e^{2[\beta(1+\gamma^2)+K]T}, \end{align} for $t\in[0,T]$. Note that the right hand side of the inequality \eqref{4.16a} is independent of $n$. Let us take supremum from $0$ to $T\wedge\tau_N^n$ before taking expectation in (\ref{3p8}) to obtain \begin{align}\label{4.17} &\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_N^n]}\|u_n(t)\|_{\mathrm{L}^2}^2+2\nu\int_0^{T\wedge\tau_N^n}\|u_n(t)\|_{\mathrm{H}_0^1}^2d t+\beta\int_0^{T\wedge\tau_N^n}\|u_n(t)\|_{\mathrm{L}^4}^4dt\right]\nonumber\\&\leq \|u_0\|_{\mathrm{L}^2}^2+\beta(1+\gamma^2)\mathbb{E}\left[\int_0^{T\wedge\tau_N^n}\|u_n(t)\|_{\mathrm{L}^2}^2dt\right] +\mathbb{E}\left[\int_0^{T\wedge\tau_N^n}\|\sigma_n(t,u_n(t))\|^2_{\mathcal{L}_{Q}}d t\right] \nonumber\\&\quad +2\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_N^n]}\left|\int_0^{t}\left(\sigma_n(s,u_n(s))dW_n(s),u_n(s)\right)\right|\right]. \end{align} Let us take the final term from the right hand side of the inequality (\ref{4.17}) and use Burkholder-Davis-Gundy (see Theorem 1, \cite{BD} and Theorem 1.1, \cite{DLB}), H\"{o}lder's and Young's inequalities to get \begin{align}\label{4.18} &2\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_N^n]}\left|\int_0^{t}\left(\sigma_n(s,u_n(s))dW_n(s),u_n(s)\right)\right|\right]\nonumber\\&\leq 2\sqrt{3}\mathbb{E}\left[\int_0^{T\wedge\tau_N^n}\|\sigma_n(t,u_n(t))\|_{\mathcal{L}_{Q}}^2\|u_n(t)\|_{\mathrm{L}^2}^2d t\right]^{1/2}\nonumber\\&\leq 2 \sqrt{3}\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_N^n]}\|u_n(t)\|_{\mathrm{L}^2}\left(\int_0^{T\wedge\tau_N^n}\|\sigma_n(t,u_n(t))\|_{\mathcal{L}_{Q}}^2d t\right)^{1/2}\right]\nonumber\\&\leq \frac{1}{2} \mathbb{E}\Bigg[\sup_{t\in[0,T\wedge\tau_N^n]}\|u_n(t)\|_{\mathrm{L}^2}^2\Bigg]+6\mathbb{E}\Bigg[\int_0^{T\wedge\tau_N^n}\|\sigma_n(t,u_n(t))\|^2_{\mathcal{L}_{Q}}d t\Bigg].
\end{align} Substituting (\ref{4.18}) in (\ref{4.17}), we find \begin{align}\label{4.20} &\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_N^n]}\|u_n(t)\|_{\mathrm{L}^2}^2+4\nu\int_0^{T\wedge\tau_N^n}\|u_n(t)\|_{\mathrm{H}_0^1}^2d t+2\beta\int_0^{T\wedge\tau_N^n}\|u_n(t)\|_{\mathrm{L}^4}^4dt\right]\nonumber\\&\leq 2\|u_0\|_{\mathrm{L}^2}^2+14KT+(2\beta(1+\gamma^2)+14K)\mathbb{E}\left[\int_0^T\chi_{[0,T\wedge\tau_N^n)}(t)\|u_n(t)\|_{\mathrm{L}^2}^2d t\right], \end{align} where we used the Hypothesis \ref{hyp} (H.2). Applying Gronwall's inequality in (\ref{4.20}), we obtain \begin{align}\label{4.21} \mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_N^n]}\|u_n(t)\|_{\mathrm{L}^2}^2\right]\leq (2\|u_0\|_{\mathrm{L}^2}^2+14KT)e^{(2\beta(1+\gamma^2)+14K)T}. \end{align} Passing $N\to\infty$, using the monotone convergence theorem and then substituting (\ref{4.21}) in (\ref{4.20}), we finally obtain the energy estimate in (\ref{energy1}). In order to prove the estimate \eqref{energy2}, we apply the finite dimensional It\^o's formula to the process $\|u_n(\cdot)\|_{\mathrm{L}^2}^{2p}$ to obtain \begin{align}\label{4.23a} &\|u_n(t)\|_{\mathrm{L}^2}^{2p}+2p\nu\int_0^{t}\|u_n(s)\|_{\mathrm{L}^2}^{2(p-1)}\|u_n(s)\|_{\mathrm{H}_0^1}^2d s\nonumber\\&= \|u_n(0)\|_{\mathrm{L}^2}^{2p}+2p\int_0^{t}\|u_n(s)\|_{\mathrm{L}^2}^{2(p-1)}\langle-\alpha B_n(u_n(s))+\beta c_n(u_n(s)),u_n(s)\rangle d s \nonumber\\&\quad +2p\int_0^{t}\|u_n(s)\|_{\mathrm{L}^2}^{2(p-1)}\left(\sigma_n(s,u_n(s))dW_n(s),u_n(s)\right) \nonumber\\&\quad +p(2p-1)\int_0^t\|u_n(s)\|^{2(p-1)}_{\mathrm{L}^2}\textrm{Tr}(\sigma(s,u_n(s)) Q\sigma(s,u_n(s)))d s. \end{align} We take supremum over time from $0$ to $T\wedge\tau_N^n$ and then take expectation in (\ref{4.23a}) to obtain \begin{align}\label{4.24a} &\mathbb{E}\bigg[\sup_{t\in[0,T\wedge\tau_N^n]}\|u_n(t)\|_{\mathrm{L}^2}^{2p}+2p\nu\int_0^{T\wedge\tau_N^n}\|u_n(t)\|_{\mathrm{L}^2}^{2(p-1)}\|u_n(t)\|_{\mathrm{H}_0^1}^2d t+2p\beta\gamma \int_0^{T\wedge\tau_N^n}\|u_n(t)\|_{\mathrm{L}^2}^{2p}dt\nonumber\\&\qquad+2p\beta\int_0^{T\wedge\tau_N^n}\|u_n(t)\|_{\mathrm{L}^2}^{2(p-1)}\|u_n(t)\|_{\mathrm{L}^4}^4dt\bigg]\nonumber\\&\leq \mathbb{E}\left[\|u_0\|_{\mathrm{L}^2}^{2p}\right]+2p\beta(1+\gamma)\mathbb{E}\left[\int_0^{T\wedge\tau_N^n}\|u_n(t)\|_{\mathrm{L}^2}^{2(p-1)}(u_n^2(t),u_n(t))d t\right] \nonumber\\&\quad +2p\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_N^n]}\left|\int_0^{t}\|u_n(s)\|_{\mathrm{L}^2}^{2(p-1)}\left(\sigma_n(s,u_n(s))dW_n(s),u_n(s)\right)\right|\right] \nonumber\\&\quad+p(2p-1)\mathbb{E}\left[\int_0^{T\wedge\tau_N^n}\|u_n(t)\|_{\mathrm{L}^2}^{2(p-1)}\|\sigma_n(t,u_n(t))\|^2_{\mathcal{L}_{Q}}d t\right]\nonumber\\&=\sum_{i=1}^3J_i, \end{align} where $J_i$, for $i=1,2,3$, are the final three terms appearing in the right hand side of the inequality \eqref{4.24a}. Let us use H\"older's and Young's inequalities to estimate $J_1$ as \begin{align}\label{3.54z} J_1&\leq 2p\beta(1+\gamma)\mathbb{E}\left[\int_0^{T\wedge\tau_N^n}\|u_n(t)\|_{\mathrm{L}^2}^{2(p-1)}\|u_n(t)\|_{\mathrm{L}^4}^2\|u_n(t)\|_{\mathrm{L}^2}d t\right]\nonumber\\&\leq 2p\beta(1+\gamma)\mathbb{E}\left[\left(\int_0^{T\wedge\tau_N^n}\|u_n(t)\|_{\mathrm{L}^2}^{2(p-1)}\|u_n(t)\|_{\mathrm{L}^4}^4dt\right)^{1/2}\left(\int_0^{T\wedge\tau_N^n}\|u_n(t)\|_{\mathrm{L}^2}^{2p}dt\right)^{1/2}\right]\nonumber\\&\leq p\beta\mathbb{E}\left[\int_0^{T\wedge\tau_N^n}\|u_n(t)\|_{\mathrm{L}^2}^{2(p-1)}\|u_n(t)\|_{\mathrm{L}^4}^4dt\right]+p\beta(1+\gamma)^2\mathbb{E}\left[\int_0^{T\wedge\tau_N^n}\|u_n(t)\|_{\mathrm{L}^2}^{2p}dt\right].
\end{align}
Using Burkholder-Davis-Gundy, H\"older's and Young's inequalities, we estimate $J_2$ as \begin{align}\label{4.31z} J_2&\leq 2p\sqrt{3}\mathbb{E}\left[\int_0^{T\wedge\tau_N^n}\|u_n(t)\|_{\mathrm{L}^2}^{4p-2}\|\sigma_n(t,u_n(t))\|_{\mathcal{L}_{Q}}^2d t\right]^{1/2}\nonumber\\&\leq 2p\sqrt{3}\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_N^n]}\|u_n(t)\|_{\mathrm{L}^2}^{p}\left(\int_0^{T\wedge\tau_N^n}\|u_n(t)\|_{\mathrm{L}^2}^{2(p-1)}\|\sigma_n(t,u_n(t))\|_{\mathcal{L}_{Q}}^2d t\right)^{1/2}\right]\nonumber\\&\leq \frac{1}{4}\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_N^n]}\|u_n(t)\|_{\mathrm{L}^2}^{2p}\right]+12p^2\mathbb{E}\left[\int_0^{T\wedge\tau_N^n}\|u_n(t)\|_{\mathrm{L}^2}^{2(p-1)}\|\sigma_n(t,u_n(t))\|_{\mathcal{L}_{Q}}^2d t\right]\nonumber\\&\leq\frac{1}{4}\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_N^n]}\|u_n(t)\|_{\mathrm{L}^2}^{2p}\right]+J_4. \end{align} Let us use H\"older's and Young's inequalities to estimate $J_3+J_4$ as \begin{align}\label{4.32z} J_3+J_4&\leq p(14p-1)\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_N^n]}\|u_n(t)\|_{\mathrm{L}^2}^{2(p-1)}\int_0^{T\wedge\tau_N^n}\|\sigma_n(t,u_n(t))\|_{\mathcal{L}_{Q}}^2d t\right]\nonumber\\&\leq\frac{1}{4}\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_N^n]}\|u_n(t)\|_{\mathrm{L}^2}^{2p}\right]\nonumber\\&\qquad+\frac{1}{p}\left(\frac{4(p-1)}{p}\right)^{p-1}(p(14p-1))^{p}\mathbb{E}\left[\int_0^{T\wedge\tau_N^n}\|\sigma_n(t,u_n(t))\|_{\mathcal{L}_{Q}}^2d t\right]^p\nonumber\\&\leq\frac{1}{4}\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_N^n]}\|u_n(t)\|_{\mathrm{L}^2}^{2p}\right]+(4(p-1))^{p-1}(14p-1)^pK^p\mathbb{E}\left[\int_0^{T\wedge\tau_N^n}(1+\|u_n(s)\|_{\mathrm{L}^2}^2)ds\right]^p\nonumber\\&\leq \frac{1}{4}\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_N^n]}\|u_n(t)\|_{\mathrm{L}^2}^{2p}\right]+C(p,K,T)\mathbb{E}\left[\int_0^{T\wedge\tau_N^n}(1+\|u_n(s)\|_{\mathrm{L}^2}^2)^pds\right]\nonumber\\&\leq \frac{1}{4}\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_N^n]}\|u_n(t)\|_{\mathrm{L}^2}^{2p}\right]+C(p,K,T)2^{p-1}T\nonumber\\&\quad+C(p,K,T)2^{p-1}\mathbb{E}\left[\int_0^{T\wedge\tau_N^n}\|u_n(s)\|_{\mathrm{L}^2}^{2p}ds\right], \end{align} where $C(p,K,T)=(4(p-1))^{p-1}(14p-1)^pK^pT^{p-1}$. Combining (\ref{3.54z})-(\ref{4.32z}) and using it in (\ref{4.24a}), we arrive at \begin{align}\label{4.35z} &\mathbb{E}\bigg[\sup_{t\in[0,T\wedge\tau_N^n]}\|u_n(t)\|_{\mathrm{L}^2}^{2p}+4p\nu\int_0^{T\wedge\tau_N^n}\|u_n(t)\|_{\mathrm{L}^2}^{2(p-1)}\|u_n(t)\|_{\mathrm{H}_0^1}^2d t\nonumber\\&\qquad+2p\beta\int_0^{T\wedge\tau_N^n}\|u_n(t)\|_{\mathrm{L}^2}^{2(p-1)}\|u_n(t)\|_{\mathrm{L}^4}^4dt\bigg]\nonumber\\&\leq 2\|u_0\|_{\mathrm{L}^2}^{2p} +C(p,K,T)2^{p}T +[2p\beta(1+\gamma)^2+C(p,K,T)2^{p}]\mathbb{E}\left[\int_0^{T\wedge\tau_N^n}\|u_n(t)\|_{\mathrm{L}^2}^{2p}dt\right]. \end{align} Let us apply Gronwall's inequality in (\ref{4.35z}) to get \begin{align}\label{4.36z} &\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_N^n]}\|u_n(t)\|_{\mathrm{L}^2}^{2p}\right]\leq \left[2\|u_0\|_{\mathrm{L}^2}^{2p} +C(p,K,T)2^{p}T\right]e^{[2p\beta(1+\gamma)^2+C(p,K,T)2^{p}]T}. \end{align} Passing $N\to\infty$, using the monotone convergence theorem in (\ref{4.36z}) and then applying it in (\ref{4.35z}), we arrive at (\ref{energy2}).
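In \eqref{4.32z} (and several times above) we used the following elementary form of Young's inequality: for $a,b\geq0$, $\varepsilon>0$ and $p>1$,
\begin{align*}
ab\leq\varepsilon a^{\frac{p}{p-1}}+\frac{1}{p}\left(\frac{p-1}{p\varepsilon}\right)^{p-1}b^{p},
\end{align*}
applied with $\varepsilon=\frac{1}{4}$, $a=\sup\limits_{t\in[0,T\wedge\tau_N^n]}\|u_n(t)\|_{\mathrm{L}^2}^{2(p-1)}$ and $b=p(14p-1)\int_0^{T\wedge\tau_N^n}\|\sigma_n(t,u_n(t))\|_{\mathcal{L}_{Q}}^2dt$.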
\end{proof} \begin{remark} From \eqref{3.6}, using Hypothesis \ref{hyp} (H.2) and It\^o isometry, we have \begin{align}\label{3.26} &\mathbb{E}\left[\left(\int_0^{T}\|u_n(s)\|_{\mathrm{H}_0^1}^2ds\right)^2\right]\nonumber\\&\leq\mathbb{E}\bigg[ \bigg(\|u_0\|_{\mathrm{L}^2}^2+\beta(1+\gamma^2)\int_0^{T}\|u_n(s)\|_{\mathrm{L}^2}^2ds+\int_0^{T}\|\sigma_n(t,u_n(t))\|^2_{\mathcal{L}_{Q}}dt \nonumber\\&\qquad+2\int_0^{T}\left(\sigma_n(t,u_n(t))dW_n(t),u_n(t)\right)\bigg)^2\bigg]\nonumber\\&\leq 4\mathbb{E}\left[\|u_0\|_{\mathrm{L}^2}^4\right]+8KT+4[\beta(1+\gamma^2)T+2KT]\mathbb{E}\left[\sup_{t\in[0,T]}\|u_n(t)\|_{\mathrm{L}^2}^4\right]\nonumber\\&\quad +16\mathbb{E}\left[\int_0^T\|\sigma_n(s,u_n(s))\|_{\mathcal{L}_{Q}}^2\|u_n(s)\|_{\mathrm{L}^2}^2ds\right]\nonumber\\&\leq 4\mathbb{E}\left[\|u_0\|_{\mathrm{L}^2}^4\right]+8KT+16KT\mathbb{E}\left[\sup_{t\in[0,T]}\|u_n(t)\|_{\mathrm{L}^2}^2\right]\nonumber\\&\quad+4[\beta(1+\gamma^2)T+18KT]\mathbb{E}\left[\sup_{t\in[0,T]}\|u_n(t)\|_{\mathrm{L}^2}^4\right]<+\infty, \end{align} by using \eqref{energy2}. Similarly, we have \begin{align}\label{3p27} \mathbb{E}\left[\left(\int_0^{T}\|u_n(t)\|_{\mathrm{L}^4}^4dt\right)^2\right]&\leq 4\mathbb{E}\left[\|u_0\|_{\mathrm{L}^2}^4\right]+8KT+16KT\mathbb{E}\left[\sup_{t\in[0,T]}\|u_n(t)\|_{\mathrm{L}^2}^2\right]\nonumber\\&\quad+4[\beta(1+\gamma^2)T+18KT]\mathbb{E}\left[\sup_{t\in[0,T]}\|u_n(t)\|_{\mathrm{L}^2}^4\right]<+\infty. \end{align} \end{remark} \subsection{Existence and uniqueness of strong solution} Let us now prove that the system (\ref{abstract}) has a unique global strong solution by exploiting the local monotonicity property (see (\ref{3.11y})) and a stochastic generalization of the Minty-Browder technique. This method is applied in \cite{ICAM} for establishing the existence of strong solutions to stochastic 2D hydrodynamical type systems. Similar existence results for the 2D stochastic Navier-Stokes equations driven by Gaussian noise can be found in \cite{MJSS,SSSP}, and for the stochastic 2D Oldroyd model of viscoelastic fluids in \cite{MTM3}. \begin{theorem}[Existence and uniqueness of strong solution to the system (\ref{abstract})]\label{exis} Let $u_0\in \mathrm{L}^{2p}(\Omega;\mathrm{L}^2(\mathcal{O}))$, $p>2$, be given. Then there exists a \emph{unique strong solution} $u(\cdot)$ to the problem (\ref{abstract}) such that $$u\in \mathrm{L}^{2p}(\Omega;\mathrm{L}^{\infty}(0,T;\mathrm{L}^2(\mathcal{O})))\cap\mathrm{L}^2(\Omega;\mathrm{L}^2(0,T;\mathrm{H}_0^1(\mathcal{O})))\cap\mathrm{L}^4(\Omega;\mathrm{L}^4(0,T;\mathrm{L}^4(\mathcal{O})))$$ and $u(\cdot)$ has a $\mathbb{P}$-a.s. continuous modification in $\mathrm{C}([0,T];\mathrm{L}^2(\mathcal{O}))\cap\mathrm{L}^2(0,T;\mathrm{H}_0^1(\mathcal{O}))$. \end{theorem} \begin{proof} The proof of the solvability results of the system (\ref{abstract}) is divided into the following steps. \vskip 0.2cm \noindent\textbf{Step (1):} \emph{Finite-dimensional (Galerkin) approximation of the system (\ref{abstract}):} Let us first consider the following It\^{o} stochastic differential equation satisfied by $\{u_n(\cdot)\}$: \begin{equation}\label{4.37} \left\{ \begin{aligned} du_n(t)&=-F(u_n(t))d t+\sigma_n(t,u_n(t))dW_n(t),\\ u_n(0)&=u_0^n, \end{aligned} \right. \end{equation} where $F(u_n)=\nu A u_n+\alpha B_n(u_n)-\beta c_n(u_n)$.
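The next identity \eqref{4.38} is It\^o's product rule applied to the real valued semimartingale $t\mapsto e^{-r(t)}\|u_n(t)\|_{\mathrm{L}^2}^2$: since $r(\cdot)$ (to be chosen later) is of finite variation,
\begin{align*}
d\left(e^{-r(t)}\|u_n(t)\|_{\mathrm{L}^2}^2\right)=-r'(t)e^{-r(t)}\|u_n(t)\|_{\mathrm{L}^2}^2dt+e^{-r(t)}d\|u_n(t)\|_{\mathrm{L}^2}^2,
\end{align*}
where $d\|u_n(t)\|_{\mathrm{L}^2}^2$ is computed by the finite dimensional It\^o formula as in \eqref{3.6}.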
Applying It\^o's formula to the process $e^{-r(t)}\|u_n(\cdot)\|_{\mathrm{L}^2}^2$, we have the following equality: \begin{align}\label{4.38} e^{-r(t)}\|u_n(t)\|_{\mathrm{L}^2}^2&=e^{-r(0)}\|u_n(0)\|_{\mathrm{L}^2}^2-\int_0^te^{-r(s)}\langle2F(u_n(s))+r'(s)u_n(s),u_n(s)\rangle d s\\&\quad+2\int_0^te^{-r(s)}\left(\sigma_n(s,u_n(s))dW_n(s),u_n(s)\right) +\int_0^te^{-r(s)}\|\sigma_n(s,u_n(s))\|_{\mathcal{L}_{Q}}^2d s,\nonumber \end{align} for all $t\in[0,T]$. The quantity $r(t)$ appearing in \eqref{4.38} will be chosen later. Note that the third term from the right hand side of the equality (\ref{4.38}) is a martingale and on taking expectation, we get \begin{align}\label{4.39} \mathbb{E}\left[e^{-r(t)}\|u_n(t)\|_{\mathrm{L}^2}^2\right]&=\mathbb{E}\left[e^{-r(0)}\|u_n(0)\|_{\mathrm{L}^2}^2\right]-\mathbb{E}\left[\int_0^te^{-r(s)}\langle 2F(u_n(s))+r'(s)u_n(s),u_n(s)\rangle d s\right]\nonumber\\&\quad+\mathbb{E}\left[\int_0^t e^{-r(s)}\|\sigma_n(s,u_n(s))\|_{\mathcal{L}_{Q}}^2d s\right], \end{align} for all $t\in[0,T]$. \vskip 0.2cm \noindent\textbf{Step (2):} \emph{Weak convergence of the sequences $u_n(\cdot)$, $F(u_n(\cdot))$ and $\sigma_n(\cdot,\cdot)$.} We know that $\mathrm{L}^2\left(\Omega;\mathrm{L}^{\infty}(0,T;\mathrm{L}^2(\mathcal{O}))\right)\cong \left(\mathrm{L}^{2}\left(\Omega;\mathrm{L}^1(0,T;\mathrm{L}^2(\mathcal{O}))\right)\right)'$, $\mathrm{L}^{2}\left(\Omega;\mathrm{L}^1(0,T;\mathrm{L}^2(\mathcal{O}))\right)$ is separable, and the spaces $\mathrm{L}^2(\Omega;\mathrm{L}^2(0,T;\mathrm{H}_0^1(\mathcal{O})))$ and $\mathrm{L}^4(\Omega;\mathrm{L}^4(0,T;\mathrm{L}^4(\mathcal{O})))$ are reflexive. Using the energy estimates in Proposition \ref{prop1}, and the Banach-Alaoglu theorem, we can extract a subsequence $\{u_{n_k}\}$ of $\{u_n\}$, which converges to the following limits (for simplicity, we denote the index $n_k$ by $n$): \begin{equation}\label{4.40} \left\{ \begin{aligned} u_n&\xrightarrow{w^{*}} u\textrm{ in }\mathrm{L}^2(\Omega;\mathrm{L}^{\infty}(0,T ;\mathrm{L}^2(\mathcal{O}))),\\ u_n&\xrightarrow{w} u\textrm{ in }\mathrm{L}^4(\Omega;\mathrm{L}^{4}(0,T ;\mathrm{L}^4(\mathcal{O}))),\\ u_n&\xrightarrow{w} u\textrm{ in }\mathrm{L}^2(\Omega;\mathrm{L}^{2}(0,T ;\mathrm{H}_0^1(\mathcal{O}))),\\ u_n(T)&\xrightarrow{w}\eta \textrm{ in }\mathrm{L}^2(\Omega;\mathrm{L}^2(\mathcal{O})),\\ F(u_n)&\xrightarrow{w} F_0\textrm{ in }\mathrm{L}^{1+\eta}(\Omega;\mathrm{L}^{1+\eta}(0,T ;\mathrm{H}^{-1}(\mathcal{O}))), \end{aligned} \right. \end{equation} for some $0<\eta\leq \frac{p-2}{p+2}$.
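The restriction $0<\eta\leq\frac{p-2}{p+2}$ is exactly what is needed for the highest moment of $u_n$ appearing in the justification below to be controlled by \eqref{energy2}:
\begin{align*}
\frac{4(1+\eta)}{1-\eta}\leq 2p\iff\eta\leq\frac{p-2}{p+2}.
\end{align*}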
The final convergence in (\ref{4.40}) can be justified as follows: \begin{align}\label{4.41} &\mathbb{E}\left[\int_0^T\left\|F(u_n(t))\right\|_{\mathrm{H}^{-1}}^{1+\eta}d t\right]\nonumber\\&\leq C(\eta)\left\{ \nu\mathbb{E}\left[\int_0^T\|Au_n(t)\|_{\mathrm{H}^{-1}}^{1+\eta}d t\right]+\alpha\mathbb{E}\left[\int_0^T\|B_n(u_n(t))\|_{\mathrm{H}^{-1}}^{1+\eta}d t\right]+\beta\mathbb{E}\left[\int_0^T\|c(u_n(t))\|_{\mathrm{H}^{-1}}^{1+\eta}d t\right]\right\}\nonumber\\&\leq C(\eta)\bigg\{ \nu\mathbb{E}\left[\int_0^T\|u_n(t)\|_{\mathrm{H}_0^1}^{1+\eta}dt\right]+\left[\alpha+\beta(1+\gamma)\right]\mathbb{E}\left[\int_0^T\|u_n(t)\|_{\mathrm{L}^4}^{2(1+\eta)}dt\right]\nonumber\\&\qquad+\beta\gamma\mathbb{E}\left[\int_0^T\|u_n(t)\|_{\mathrm{L}^2}^{1+\eta}dt\right]+\beta\mathbb{E}\left[\int_0^T\|u_n(t)\|_{\mathrm{L}^6}^{3(1+\eta)}dt\right]\bigg\}\nonumber\\&\leq C(\eta)\bigg\{ T^{\frac{1-\eta}{2}}\left\{\mathbb{E}\left[\int_0^T\|u_n(t)\|_{\mathrm{H}_0^1}^2d t\right]\right\}^{\frac{1+\eta}{2}}+[\alpha+\beta(1+\gamma)]T^{\frac{1-\eta}{2}}\left\{\mathbb{E}\left[\int_0^T\|u_n(t)\|_{\mathrm{L}^4}^4dt\right]\right\}^{\frac{1+\eta}{2}}\nonumber\\&\quad+\beta\gamma T^{\frac{1-\eta}{2}}\mathbb{E}\left[\int_0^T\|u_n(t)\|_{\mathrm{L}^2}^2dt\right]^{\frac{1+\eta}{2}}+\beta\mathbb{E}\left[\int_0^T\|u_n(t)\|_{\mathrm{L}^6}^{3(1+\eta)}dt\right]\bigg\}, \end{align} where we used \eqref{2p5} and H\"older's inequality. The final term from \eqref{4.41} can be controlled by the Gagliardo-Nirenberg interpolation and H\"older's inequalities as \begin{align} &\mathbb{E}\left[\int_0^T\|u_n(t)\|_{\mathrm{L}^6}^{3(1+\eta)}dt\right]\nonumber\\&\leq C\mathbb{E}\left[\int_0^T\|\partial_xu_n(t)\|_{\mathrm{L}^2}^{(1+\eta)}\|u_n(t)\|_{\mathrm{L}^2}^{2(1+\eta)}dt\right]\nonumber\\&\leq C\mathbb{E}\left[\left(\int_0^T\|u_n(t)\|_{\mathrm{H}_0^1}^2dt\right)^{\frac{1+\eta}{2}}\left(\int_0^T\|u_n(t)\|_{\mathrm{L}^2}^{\frac{4(1+\eta)}{1-\eta}}dt\right)^{\frac{1-\eta}{2}}\right]\nonumber\\&\leq CT^{\frac{1-\eta}{2}}\left\{\mathbb{E}\left[\int_0^T\|u_n(t)\|_{\mathrm{H}_0^1}^2dt\right]\right\}^{\frac{1+\eta}{2}}\left\{\mathbb{E}\left[\sup_{t\in[0,T]}\|u_n(t)\|_{\mathrm{L}^2}^{\frac{4(1+\eta)}{1-\eta}}\right]\right\}^{\frac{1-\eta}{2}}<+\infty, \end{align} using Proposition \ref{prop1} (see \eqref{energy1} and \eqref{energy2}). Using the Hypothesis \ref{hyp} (H.1) and the energy estimates given in Proposition \ref{prop1}, we have \begin{align}\label{4.42} \mathbb{E}\left[\int_0^{T }\|\sigma_n(t,u_n(t))\|_{\mathcal{L}_{Q}}^2d t\right]&\leq K\mathbb{E}\left[\int_0^T\left(1+\|u_n(t)\|_{\mathrm{L}^2}^2\right)d t\right]\nonumber\\&\leq KT(2\|u_0\|_{\mathrm{L}^2}^2+14KT)e^{4(\beta(1+\gamma^2)+7K)T}<+\infty. \end{align} Thus, we can extract a subsequence $\{\sigma_{n_k}(\cdot,u_{n_k})\}$ which converges to the following limit (denoting the index $n_k$ by $n$): \begin{equation}\label{4.43z} \sigma_n(\cdot,u_n)P_n\xrightarrow{w} \Phi(\cdot)\textrm{ in }\mathrm{L}^2(\Omega;\mathrm{L}^2(0,T ;\mathcal{L}_{Q}(\mathrm{L}^2(\mathcal{O})))). \end{equation} As discussed in Theorem 7.5 of \cite{chow}, one can prove that $u(\cdot)$ satisfies the It\^{o} stochastic differential equation: \begin{equation}\label{4.44} \left\{ \begin{aligned} du(t)&=-F_0(t)d t+\Phi(t)dW(t),\\ u(0)&=u_0. \end{aligned} \right.
\end{equation} A calculation similar to (\ref{4.39}) yields \begin{align}\label{4.45} \mathbb{E}\left[e^{-r(t)}\|u(t)\|_{\mathrm{L}^2}^2\right]&=\mathbb{E}\left[e^{-r(0)}\|u_0\|_{\mathrm{L}^2}^2\right]-\mathbb{E}\left[\int_0^te^{-r(s)}\langle 2F_0(s)+r'(s)u(s),u(s)\rangle d s\right]\nonumber\\&\quad+\mathbb{E}\left[\int_0^t e^{-r(s)}\|\Phi(s)\|_{\mathcal{L}_{Q}}^2d s\right], \end{align} for all $t\in[0,T]$. Also, it should be noted that the initial value $u_n(0)$ converges to $u_0$ strongly in $\mathrm{L}^2(\Omega;\mathrm{L}^2(\mathcal{O}))$, that is, \begin{align}\label{4.46} \lim_{n\to\infty}\mathbb{E}\left[\|u_n(0)-u_0\|_{\mathrm{L}^2}^2\right]=0. \end{align} \noindent\textbf{Step (3):} \emph{Minty-Browder technique and global strong solution.} Let us now prove that $F(u(\cdot))=F_0(\cdot)$ and $\sigma(\cdot,u(\cdot))=\Phi(\cdot)$. For $v\in\mathrm{L}^2(\Omega;\mathrm{L}^{2}(0,T;\mathrm{H}_m))$ with $m<n$, let us define \begin{align}\label{4.47} r(t)=\frac{\alpha^2}{\nu}\int_0^t\|v(s)\|_{\mathrm{L}^{\infty}}^2ds+[2\beta(1+\gamma+\gamma^2)+L]t, \end{align} so that \begin{align*} r'(t)=\frac{\alpha^2}{\nu}\|v(t)\|_{\mathrm{L}^{\infty}}^2+[2\beta(1+\gamma+\gamma^2)+L], \ \text{ a.e.} \end{align*} From the local monotonicity result (see (\ref{3.11y})), we have \begin{align}\label{4.48} &\mathbb{E}\bigg[\int_0^{T}e^{-r(t)}\Big(2\langle F(v(t))- F(u_n(t)),v(t)-u_n(t)\rangle +r'(t)\left(v(t)-u_n(t),v(t)-u_n(t)\right)\Big)d t\bigg]\nonumber\\&\geq \mathbb{E}\left[\int_0^{T}e^{-r(t)}\|\sigma_n(t, v(t)) - \sigma_n(t,u_n(t))\|^2_{\mathcal{L}_{Q}}d t\right]. \end{align} In (\ref{4.48}), rearranging the terms and using the energy equality (\ref{4.39}), we get \begin{align}\label{4.49} &\mathbb{E}\left[\int_0^{T}e^{-r(t)}\langle 2F(v(t))+r'(t)v(t),v(t)-u_n(t)\rangle d t\right]\nonumber\\&\quad-\mathbb{E}\left[\int_0^{T}e^{-r(t)}\|\sigma_n(t, v(t))\|^2_{\mathcal{L}_{Q}} d t\right]+2\mathbb{E}\left[\int_0^{T}e^{-r(t)}\left(\sigma_n(t, v(t)), \sigma_n(t,u_n(t))\right)_{\mathcal{L}_{Q}}d t\right]\nonumber\\&\geq \mathbb{E}\left[\int_0^{T}e^{-r(t)}\langle 2F(u_n(t))+r'(t)u_n(t),v(t)\rangle d t\right]\nonumber\\&\quad-\mathbb{E}\left[\int_0^{T}e^{-r(t)}\langle 2F(u_n(t))+r'(t)u_n(t),u_n(t)\rangle d t\right]+\mathbb{E}\left[\int_0^{T}e^{-r(t)}\| \sigma_n(t,u_n(t))\|^2_{\mathcal{L}_{Q}}d t\right]\nonumber\\&=\mathbb{E}\left[\int_0^{T}e^{-r(t)}\langle 2F(u_n(t))+r'(t)u_n(t),v(t)\rangle d t\right] +\mathbb{E}\left[e^{-r(T)}\|u_n(T)\|_{\mathrm{L}^2}^2-\|u_n(0)\|_{\mathrm{L}^2}^2\right]. \end{align} We use the weak convergence in (\ref{4.43z}) and the Lebesgue dominated convergence theorem to deduce that \begin{align}\label{4.50} &\mathbb{E}\Bigg[\int_0^{T}e^{-r(t)}\left(2\left(\sigma_n(t, v(t)), \sigma_n(t,u_n(t))\right)_{\mathcal{L}_{Q}}-\|\sigma_n(t, v(t))\|^2_{\mathcal{L}_{Q}}\right)d t\Bigg]\nonumber\\& \to \mathbb{E}\left[\int_0^{T}e^{-r(t)}\left(2\left(\sigma(t, v(t)), \Phi(t)\right)_{\mathcal{L}_{Q}}-\|\sigma(t, v(t))\|^2_{\mathcal{L}_{Q}}\right)d t\right], \end{align} as $n\to\infty$.
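Recall, for use below, that the operator $F(\cdot)$ is \emph{hemicontinuous}, that is, for fixed $u$, $v$ and $z$, the real valued map
\begin{align*}
\lambda\mapsto\langle F(u+\lambda v),z\rangle
\end{align*}
is continuous on $\mathbb{R}$; this is the property invoked after \eqref{4.55}.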
On taking liminf on both sides of (\ref{4.49}), and using (\ref{4.50}), we obtain \begin{align}\label{4.51} &\mathbb{E}\left[\int_0^{T}e^{-r(t)}\langle 2F(v(t))+r'(t)v(t),v(t)-u(t)\rangle d t\right]\nonumber\\&\quad-\mathbb{E}\left[\int_0^{T}e^{-r(t)}\|\sigma(t, v(t))\|^2_{\mathcal{L}_{Q}} d t\right]+2\mathbb{E}\left[\int_0^{T}e^{-r(t)}\left(\sigma(t, v(t)), \Phi(t)\right)_{\mathcal{L}_{Q}}d t\right]\\&\geq\mathbb{E}\left[\int_0^{T}e^{-r(t)}\langle 2F_0(t)+r'(t)u(t),v(t)\rangle d t\right] +\liminf_{n\to\infty}\mathbb{E}\left[e^{-r(T)}\|u_n(T)\|_{\mathrm{L}^2}^2-\|u_n(0)\|_{\mathrm{L}^2}^2\right].\nonumber \end{align} Using the lower semicontinuity property of the $\mathrm{L}^2$-norm and (\ref{4.46}), the second term on the right hand side of the inequality (\ref{4.51}) satisfies the following inequality: \begin{align}\label{4.52} &\liminf_{n\to\infty}\mathbb{E}\left[e^{-r(T)}\|u_n(T)\|_{\mathrm{L}^2}^2-\|u_n(0)\|_{\mathrm{L}^2}^2\right]\geq \mathbb{E}\left[e^{-r(T)}\|u(T)\|^2_{\mathrm{L}^2}-\|u_0\|^2_{\mathrm{L}^2}\right]. \end{align} Hence by using the energy equality (\ref{4.45}) and (\ref{4.52}) in (\ref{4.51}), we find \begin{align}\label{4.53} &\mathbb{E}\left[\int_0^{T}e^{-r(t)}\langle 2F(v(t))+r'(t)v(t),v(t)-u(t)\rangle d t\right]\nonumber\\&\geq\mathbb{E}\left[\int_0^{T}e^{-r(t)}\|\sigma(t, v(t))\|^2_{\mathcal{L}_{Q}} d t\right]-2\mathbb{E}\left[\int_0^{T}e^{-r(t)}\left(\sigma(t, v(t)), \Phi(t)\right)_{\mathcal{L}_{Q}}d t\right]\nonumber\\&\quad+\mathbb{E}\left[\int_0^{T}e^{-r(t)}\|\Phi(t)\|^2_{\mathcal{L}_{Q}}d t\right] +\mathbb{E}\left[\int_0^{T}e^{-r(t)}\langle 2F_0(t)+r'(t)u(t),v(t)-u(t)\rangle d t\right]. \end{align} Thus, by rearranging the terms in (\ref{4.53}), we finally obtain \begin{align}\label{4.54} &\mathbb{E}\left[\int_0^{T}e^{-r(t)}\langle 2F(v(t))-2F_0(t)+r'(t)(v(t)-u(t)),v(t)-u(t)\rangle d t\right]\nonumber\\&\geq \mathbb{E}\Bigg[\int_0^{T}e^{-r(t)}\|\sigma(t, v(t))-\Phi(t)\|^2_{\mathcal{L}_{Q}} d t\Bigg]\geq 0. \end{align} The estimate (\ref{4.54}) holds true for any $v\in\mathrm{L}^2(\Omega;\mathrm{L}^{2}(0,T;\mathrm{H}_m))$ and for any $m\in\mathbb{N}$, since the estimate (\ref{4.54}) is independent of $m$ and $n$. It can be easily seen by a density argument that the inequality (\ref{4.54}) remains true for any $$v\in\mathrm{L}^{2p}(\Omega;\mathrm{L}^{\infty}(0,T;\mathrm{L}^2(\mathcal{O})))\cap\mathrm{L}^2(\Omega;\mathrm{L}^2(0,T;\mathrm{H}_0^1(\mathcal{O})))\cap\mathrm{L}^4(\Omega;\mathrm{L}^4(0,T;\mathrm{L}^4(\mathcal{O})))=:\mathcal{J},$$ for $p>2$. Indeed, for any $v\in\mathcal{J}$, there exists a sequence $\{v_m\}$ with $v_m\in\mathrm{L}^2(\Omega;\mathrm{L}^{2}(0,T;\mathrm{H}_m))$ converging strongly to $v$ in $\mathcal{J}$, and each $v_m$ satisfies the inequality (\ref{4.54}). Taking $v(\cdot)=u(\cdot)$ in (\ref{4.54}) immediately gives $\sigma(\cdot,u(\cdot))=\Phi(\cdot)$. Let us now take $v(\cdot)=u(\cdot)+\lambda w(\cdot)$, $\lambda>0$, where $w\in\mathrm{L}^{4}(\Omega;\mathrm{L}^{\infty}(0,T;\mathrm{L}^2(\mathcal{O})))\cap\mathrm{L}^2(\Omega;\mathrm{L}^2(0,T;\mathrm{H}_0^1(\mathcal{O}))),$ and substitute for $v$ in (\ref{4.54}) to find \begin{align}\label{4.55} \mathbb{E}\left[\int_0^{T}e^{-r(t)}\langle 2F(u(t)+\lambda w(t))-2F_0(t)+r'(t)\lambda w(t),\lambda w(t)\rangle d t\right]\geq 0. \end{align} Let us divide the inequality (\ref{4.55}) by $\lambda$, use the hemicontinuity property of $F(\cdot)$, and let $\lambda\to 0$ to obtain \begin{align}\label{4.56} \mathbb{E}\left[\int_0^{T}e^{-r(t)}\langle F(u(t))-F_0(t),w(t)\rangle d t\right]\geq 0.
\end{align} The final term from (\ref{4.55}) tends to $0$ as $\lambda\to0$, since \begin{align}\label{4.57} &\mathbb{E}\left[\int_0^{T}e^{-r(t)}r'(t)\left(w(t),w(t)\right)d t\right]\nonumber\\&=\mathbb{E}\left[\int_0^{T}e^{-r(t)}\left\{\frac{\alpha^2}{\nu}\|v(t)\|_{\mathrm{L}^{\infty}}^2+2\beta(1+\gamma+\gamma^2)+L\right\}\|w(t)\|_{\mathrm{L}^2}^2d t\right]\nonumber\\&\leq \frac{C\alpha^2}{\nu}\mathbb{E}\left[\sup_{t\in[0,T]}\|w(t)\|_{\mathrm{L}^2}^2\int_0^T\|v(t)\|_{\mathrm{H}_0^1}^2dt\right]+[2\beta(1+\gamma+\gamma^2)+L]\mathbb{E}\left[\int_0^T\|w(t)\|_{\mathrm{L}^2}^2dt\right]\nonumber\\&\leq \frac{C\alpha^2}{\nu}\left\{\mathbb{E}\left[\sup_{t\in[0,T]}\|w(t)\|_{\mathrm{L}^2}^4\right]\right\}^{1/2}\left\{\mathbb{E}\left[\int_0^T\|v(t)\|_{\mathrm{H}_0^1}^2dt\right]^2\right\}^{1/2}\nonumber\\&\quad+[2\beta(1+\gamma+\gamma^2)+L]\mathbb{E}\left[\int_0^T\|w(t)\|_{\mathrm{L}^2}^2dt\right]<+\infty, \end{align} by using \eqref{3.26} and \eqref{energy2}. Thus from (\ref{4.56}), we have $F(u(t))=F_0(t)$ and hence $u(\cdot)$ is a strong solution of the system (\ref{abstract}) and $u\in\mathcal{J}$. It is clear that $u(\cdot)$ has a modification, whose $\mathscr{F}_t$-adapted paths are continuous with trajectories in $\mathrm{C}([0,T];\mathrm{L}^2(\mathcal{O}))$, $\mathbb{P}$-a.s. (see \cite{Me}). \vskip 0.2cm \noindent\textbf{Step (4):} \emph{Pathwise uniqueness.} Let $u_1(\cdot)$ and $u_2(\cdot)$ be two solutions of the system (\ref{abstract}). For $N>0$, let us define \begin{align*} \tau_N^1=\inf_{0\leq t\leq T}\Big\{t:\|u_1(t)\|_{\mathrm{L}^2}\geq N\Big\},\ \tau_N^2=\inf_{0\leq t\leq T}\Big\{t:\|u_2(t)\|_{\mathrm{L}^2}\geq N\Big\}\text{ and }\tau_N=\tau_N^1\wedge\tau_N^2. \end{align*} One can show that $\tau_N\to T$ as $N\to\infty$, $\mathbb{P}$-a.s. Let us take $w(\cdot)=u_1(\cdot)-u_2(\cdot)$ and $\widetilde{\sigma}(\cdot)=\sigma(\cdot,u_1(\cdot))-\sigma(\cdot,u_2(\cdot))$. Then, $w(\cdot)$ satisfies the system \begin{equation} \left\{ \begin{aligned} dw(t)&=\left[-\nu Aw(t)-\alpha (B(u_1(t))-B(u_2(t)))+\beta(c(u_1(t))-c(u_2(t)))\right]d t\\&\quad+\widetilde{\sigma}(t)dW(t),\\ w(0)&=w_0. \end{aligned} \right. \end{equation} We apply the infinite dimensional It\^o's formula (see \cite{IG}, Theorem 6.1, \cite{Me}) to the process $e^{-\rho(t)}\|w(t)\|_{\mathrm{L}^2}^2,$ where \begin{align}\label{3pp51}\rho(t)=\frac{C\alpha^2}{\nu}\int_0^t\|u_2(s)\|_{\mathrm{H}_0^1}^2ds,\text{ so that }\ \rho'(t)= \frac{C\alpha^2}{\nu}\|u_2(t)\|_{\mathrm{H}_0^1}^2, \ \text{ a.e.},\end{align} to find \begin{align}\label{4.59} &e^{-\rho(t\wedge\tau_N)}\|w(t\wedge\tau_N)\|_{\mathrm{L}^2}^2+2\nu\int_0^{t\wedge\tau_N}e^{-\rho(s)}\|w(s)\|_{\mathrm{H}_0^1}^2d s\nonumber\\&=\|w(0)\|_{\mathrm{L}^2}^2 -\int_0^{t\wedge\tau_N}\rho'(s)e^{-\rho(s)}\|w(s)\|_{\mathrm{L}^2}^2d s-2\alpha\int_0^{t\wedge\tau_N}e^{-\rho(s)}(u_2(s),w(s)\partial_xw(s))ds\nonumber\\&\quad +2\beta\int_0^{t\wedge\tau_N}e^{-\rho(s)}(c(u_1(s))-c(u_2(s)),u_1(s)-u_2(s))ds+\int_0^{t\wedge\tau_N}e^{-\rho(s)}\|\widetilde\sigma(s)\|_{\mathcal{L}_{Q}}^2d s\nonumber\\&\quad+2\int_0^{t\wedge\tau_N}e^{-\rho(s)}\left(\widetilde{\sigma}(s)dW(s),w(s)\right), \end{align} where we used \eqref{2.10} and \eqref{2.11}.
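In the next estimate we use the one dimensional Agmon type inequality: for $u\in\mathrm{H}_0^1(0,1)$,
\begin{align*}
\|u\|_{\mathrm{L}^{\infty}}^2\leq 2\|u\|_{\mathrm{L}^2}\|\partial_xu\|_{\mathrm{L}^2},
\end{align*}
which follows from $u(x)^2=2\int_0^xu(y)\partial_yu(y)dy$ and the Cauchy-Schwarz inequality; it produces the factor $\|u_2\|_{\mathrm{H}_0^1}^2\|w\|_{\mathrm{L}^2}^2$ in \eqref{3p51} below.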
We estimate the term $-2\alpha(u_2,w\partial_xw)$ using H\"older's, Gagliardo-Nirenberg interpolation and Young's inequalities as \begin{align}\label{3p51} -2\alpha(u_2,w\partial_xw)&\leq 2\alpha\|u_2\|_{\mathrm{L}^{\infty}}\|w\|_{\mathrm{L}^2}\|\partial_xw\|_{\mathrm{L}^2}\leq\nu\|w\|_{\mathrm{H}_0^1}^2+\frac{C\alpha^2}{\nu}\|u_2\|_{\mathrm{H}_0^1}^2\|w\|_{\mathrm{L}^2}^2. \end{align} Using \eqref{3p51} and \eqref{2.11} in \eqref{4.59}, we get \begin{align}\label{4.60} &e^{-\rho(t\wedge\tau_N)}\|w(t\wedge\tau_N)\|_{\mathrm{L}^2}^2+\nu\int_0^{t\wedge\tau_N}e^{-\rho(s)}\|w(s)\|_{\mathrm{H}_0^1}^2d s\nonumber\\&\leq \|w(0)\|_{\mathrm{L}^2}^2-\int_0^{t\wedge\tau_N}\rho'(s)e^{-\rho(s)}\|w(s)\|_{\mathrm{L}^2}^2d s +2\beta(1+\gamma+\gamma^2)\int_0^{t\wedge\tau_N}e^{-\rho(s)}\|w(s)\|_{\mathrm{L}^2}^2ds\nonumber\\&\quad+\frac{C\alpha^2}{\nu}\int_0^{t\wedge\tau_N}e^{-\rho(s)}\|u_2(s)\|_{\mathrm{H}_0^1}^2\|w(s)\|_{\mathrm{L}^2}^2ds \nonumber\\&\quad+\int_0^{t\wedge\tau_N}e^{-\rho(s)}\|\widetilde\sigma(s)\|_{\mathcal{L}_{Q}}^2d s+2\int_0^{t\wedge\tau_N}e^{-\rho(s)}\left(\widetilde{\sigma}(s)dW(s),w(s)\right). \end{align} Note that the final term in the right hand side of the inequality (\ref{4.60}) is a local martingale. Let us take expectation in (\ref{4.60}), and use the Hypothesis \ref{hyp} (H.2) to get \begin{align}\label{4.62} &\mathbb{E}\left[e^{-\rho(t\wedge\tau_N)}\|w(t\wedge\tau_N)\|_{\mathrm{L}^2}^2\right]\nonumber\\&\leq \mathbb{E}\left[\|w(0)\|_{\mathrm{L}^2}^2\right]+[2\beta(1+\gamma+\gamma^2)+L]\mathbb{E}\left[\int_0^{t\wedge\tau_N}e^{-\rho(s)}\|w(s)\|_{\mathrm{L}^2}^2d s\right]. \end{align} We apply Gronwall's inequality in (\ref{4.62}) to obtain \begin{align}\label{4.63} &\mathbb{E}\left[e^{-\rho(t\wedge\tau_N)}\|w(t\wedge\tau_N)\|_{\mathrm{L}^2}^2\right]\leq \mathbb{E}\left[\|w(0)\|_{\mathrm{L}^2}^2\right]e^{[2\beta(1+\gamma+\gamma^2)+L]T}. \end{align} Thus the initial data $u_1(0)=u_2(0)=u_0$ leads to $w(t\wedge\tau_N)=0$, $\mathbb{P}$-a.s. But the fact that $\tau_N\to T$, $\mathbb{P}$-a.s., gives $w(t)=0$ and hence $u_1(t) = u_2(t)$, for all $t \in[0, T ]$, $\mathbb{P}$-a.s., and the uniqueness follows. \end{proof} \begin{remark}[Regularity] Let us now assume $u_0\in\mathrm{L}^2(\Omega;\mathrm{H}_0^1(\mathcal{O}))\cap\mathrm{L}^{2p}(\Omega;\mathrm{L}^2(\mathcal{O})),$ for $p\geq 4$, and that there exists a positive constant $\widetilde{K}$ such that, for all $u\in \mathrm{H}_0^1(\mathcal{O})$ and $t\in[0,T]$, \begin{equation*} \|A^{1/2}\sigma(t, u)\|^{2}_{\mathcal{L}_{Q}} \leq \widetilde{K}\left(1 +\|u\|_{\mathrm{H}_0^1}^{2}\right). \end{equation*} Let us now apply the finite dimensional It\^o's formula to the process $\|A^{1/2}u_n(\cdot)\|_{\mathrm{L}^2}^2$ to obtain \begin{align}\label{3.57} &\|A^{1/2}u_n(t)\|_{\mathrm{L}^2}^2+2\nu\int_0^t\|A u_n(s)\|_{\mathrm{L}^2}^2d s\nonumber\\&=\|A^{1/2}u_0\|_{\mathrm{L}^2}^2-2\alpha\int_0^t(B_n(u_n(s)),Au_n(s))ds+2\beta\int_0^t(c_n(u_n(s)),Au_n(s))ds\nonumber\\&\quad+\int_0^t\mathop{\mathrm{Tr}}(A^{1/2}\sigma(s,u_n(s)) Q(A^{1/2}\sigma(s,u_n(s)))^*) ds+2\int_0^t(\sigma(s,u_n(s)) d W(s),Au_n(s)).
\end{align} We estimate $-2\alpha(B(u_n),Au_n)$ and $2\beta(c(u_n),Au_n)$ using H\"older's, Gagliardo-Nirenberg and Young's inequalities as \begin{align}\label{3.58} -2\alpha(B(u_n),Au_n)&\leq 2\alpha\|u_n\partial_xu_n\|_{\mathrm{L}^2}\|Au_n\|_{\mathrm{L}^2}\leq 2\alpha\|u_n\|_{\mathrm{L}^6}\|\partial_xu_n\|_{\mathrm{L}^3}\|Au_n\|_{\mathrm{L}^2}\nonumber\\&\leq 2C\alpha\|u_n\|_{\mathrm{L}^6}^{3/2}\|Au_n\|_{\mathrm{L}^2}^{3/2}\leq\frac{\nu}{2}\|Au_n\|_{\mathrm{L}^2}^2+\frac{27C\alpha^4}{2\nu^3}\|u_n\|_{\mathrm{L}^6}^6,\\ 2\beta(c(u_n),Au_n)&=2\beta(1+\gamma)(u_n^2,Au_n)-2\beta\gamma(u_n,Au_n)-2\beta(u_n^3,Au_n)\nonumber\\&\leq-2\beta\gamma\|\partial_xu_n\|_{\mathrm{L}^2}^2-2\beta\|u_n\partial_xu_n\|_{\mathrm{L}^2}^2+2\beta(1+\gamma)\|u_n\|_{\mathrm{L}^4}^2\|Au_n\|_{\mathrm{L}^2}\nonumber\\&\leq- 2\beta\gamma\|\partial_xu_n\|_{\mathrm{L}^2}^2-2\beta\|u_n\partial_xu_n\|_{\mathrm{L}^2}^2+\frac{\nu}{2}\|Au_n\|_{\mathrm{L}^2}^2+\frac{2\beta^2(1+\gamma)^2}{\nu}\|u_n\|_{\mathrm{L}^4}^4. \label{3p59} \end{align} Thus, using \eqref{3.58}-\eqref{3p59} in \eqref{3.57}, taking supremum over time from $0$ to $T$, and then taking expectation, we get \begin{align}\label{3p60} & \mathbb{E}\left[\sup_{t\in[0,T]}\|u_n(t)\|_{\mathrm{H}_0^1}^2+\nu\int_0^T\|A u_n(t)\|_{\mathrm{L}^2}^2d t+2\beta\gamma\int_0^T\|u_n(t)\|_{\mathrm{H}_0^1}^2dt+2\beta\int_0^T\|u_n(t)\partial_xu_n(t)\|_{\mathrm{L}^2}^2dt\right]\nonumber\\&\leq\mathbb{E}\left[\|u_0\|_{\mathrm{H}_0^1}^2\right]+\frac{27C\alpha^4}{2\nu^3}\mathbb{E}\left[\int_0^T\|u_n(t)\|_{\mathrm{L}^6}^6dt\right]+\frac{2\beta^2(1+\gamma)^2}{\nu}\mathbb{E}\left[\int_0^T\|u_n(t)\|_{\mathrm{L}^4}^4dt\right]\nonumber\\&\quad+\mathbb{E}\left[\int_0^T\|A^{1/2}\sigma(t,u_n(t))\|_{\mathcal{L}_Q}^2dt\right]+2\mathbb{E}\left[\sup_{t\in[0,T]}\left|\int_0^t(A^{1/2}\sigma(s,u_n(s)) d W(s),A^{1/2}u_n(s))\right|\right]. \end{align} Let us take the final term from the right hand side of the inequality (\ref{3p60}) and use Burkholder-Davis-Gundy, H\"{o}lder's and Young's inequalities to get \begin{align}\label{3p61} &2\mathbb{E}\left[\sup_{t\in[0,T]}\left|\int_0^{t}\left(A^{1/2}\sigma(s,u_n(s))dW(s),A^{1/2}u_n(s)\right)\right|\right]\nonumber\\&\leq 2\sqrt{3}\mathbb{E}\left[\int_0^{T}\|A^{1/2}\sigma(t,u_n(t))\|_{\mathcal{L}_{Q}}^2\|A^{1/2}u_n(t)\|_{\mathrm{L}^2}^2d t\right]^{1/2}\nonumber\\&\leq 2 \sqrt{3}\mathbb{E}\left[\sup_{t\in[0,T]}\|A^{1/2}u_n(t)\|_{\mathrm{L}^2}\left(\int_0^{T}\|A^{1/2}\sigma(t,u_n(t))\|_{\mathcal{L}_{Q}}^2d t\right)^{1/2}\right]\nonumber\\&\leq \frac{1}{2} \mathbb{E}\Bigg[\sup_{t\in[0,T]}\|u_n(t)\|_{\mathrm{H}_0^1}^2\Bigg]+6\mathbb{E}\Bigg[\int_0^{T}\|A^{1/2}\sigma(t,u_n(t))\|^2_{\mathcal{L}_{Q}}d t\Bigg]. \end{align} Substituting \eqref{3p61} in \eqref{3p60}, we obtain \begin{align}\label{3p62} & \mathbb{E}\left[\sup_{t\in[0,T]}\|u_n(t)\|_{\mathrm{H}_0^1}^2+2\nu\int_0^T\|A u_n(t)\|_{\mathrm{L}^2}^2d t+4\beta\gamma\int_0^T\|u_n(t)\|_{\mathrm{H}_0^1}^2dt+4\beta\int_0^T\|u_n(t)\partial_xu_n(t)\|_{\mathrm{L}^2}^2dt\right]\nonumber\\& \leq 2\mathbb{E}\left[\|u_0\|_{\mathrm{H}_0^1}^2\right]+\frac{27C\alpha^4}{\nu^3}\mathbb{E}\left[\int_0^T\|u_n(t)\|_{\mathrm{L}^6}^6dt\right]+\frac{4\beta^2(1+\gamma)^2}{\nu}\mathbb{E}\left[\int_0^T\|u_n(t)\|_{\mathrm{L}^4}^4dt\right]\nonumber\\&\quad+14\widetilde{K}T+14\widetilde{K}\mathbb{E}\left[\int_0^T\|u_n(t)\|_{\mathrm{H}_0^1}^2dt\right].
\end{align} Applying Gronwall's inequality in \eqref{3p62} gives \begin{align}\label{3p63} \mathbb{E}\left[\sup_{t\in[0,T]}\|u_n(t)\|_{\mathrm{H}_0^1}^2\right]&\leq \bigg\{2\mathbb{E}\left[\|u_0\|_{\mathrm{H}_0^1}^2\right]+14\widetilde{K}T+\frac{4\beta^2(1+\gamma)^2}{\nu}\mathbb{E}\left[\int_0^T\|u_n(t)\|_{\mathrm{L}^4}^4dt\right]\nonumber\\&\qquad+\frac{27C\alpha^4}{\nu^3}\mathbb{E}\left[\int_0^T\|u_n(t)\|_{\mathrm{L}^6}^6dt\right]\bigg\}e^{14\widetilde{K}T}. \end{align} Using Gagliardo-Nirenberg and H\"older's inequalities, one can easily get \begin{align*} \mathbb{E}\left[\int_0^T\|u_n(t)\|_{\mathrm{L}^6}^6dt\right]\leq& C\mathbb{E}\left[\sup_{t\in[0,T]}\|u_n(t)\|_{\mathrm{L}^2}^4\int_0^T\|\partial_xu_n(t)\|_{\mathrm{L}^2}^2dt\right]\nonumber\\&\leq C\left\{\mathbb{E}\left[\sup_{t\in[0,T]}\|u_n(t)\|_{\mathrm{L}^2}^8\right]\right\}^{1/2}\left\{\mathbb{E}\left[\left(\int_0^T\|u_n(t)\|_{\mathrm{H}_0^1}^2\right)^2\right]\right\}^{1/2}<+\infty, \end{align*} whenever $u_0\in\mathrm{L}^{2p}(\Omega;\mathrm{L}^2(\mathcal{O}))$, for $p\geq 4$. Combining \eqref{3p62}-\eqref{3p62}, we obtain \begin{align} \mathbb{E}\left[\sup_{t\in[0,T]}\|u_n(t)\|_{\mathrm{H}_0^1}^2+2\nu\int_0^T\|A u_n(t)\|_{\mathrm{L}^2}^2d t\right]\leq C\left(\mathbb{E}\left[\|u_0\|_{\mathrm{H}_0^1}^2\right], \mathbb{E}\left[\|u_0\|_{\mathrm{L}^2}^{2p},\beta,\gamma,\nu,\alpha,K,\widetilde{K},T\right]\right), \end{align} for $p\geq 4$. Thus, using the Banach-Alaoglu theorem, we can extract a subsequence such that \begin{equation} \left\{ \begin{aligned} u_n&\xrightarrow{w^*}u\ \text{ in }\ \mathrm{L}^{2}(\Omega;\mathrm{L}^{\infty}(0,T;\mathrm{H}_0^1(\Omega))),\\ u_n&\xrightarrow{w}u\ \text{ in }\ \mathrm{L}^2(\Omega;\mathrm{L}^2(0,T;\mathrm{D}(A))). \end{aligned} \right. \end{equation} Since $u(\cdot)$ is the unique strong solution of the system \eqref{abstract}, we obtain the regularity of $u_n(\cdot)$ as $$u\in\mathrm{L}^2(\Omega;\mathrm{L}^{\infty}(0,T;\mathrm{H}_0^1(\mathcal{O}))\cap\mathrm{L}^2(0,T;\mathrm{H}^2(\mathcal{O}))),$$ and one can prove that $u$ has a continuous modification in $\mathrm{C}([0,T];\mathrm{H}_0^1(\mathcal{O}))\cap\mathrm{L}^2(0,T;\mathrm{H}^2(\mathcal{O}))$, $\mathbb{P}$-a.s. \end{remark} \iffalse \subsection{Non-rectangular domain} As in \cite{YBBKS,MTM1}, one can consider the following stochastic Burgers-Huxley equation: \begin{equation}\label{383} \left\{\begin{aligned} du(t)&=[\nu\partial_{xx}u(t)-\alpha u(t)\partial_xu(t)+\beta u(t)(1-u(t))(u(t)-\gamma)]dt\\&\quad+\sigma(t,u(t))dW(t), \ t\in(0,T), \\ u(x,t)\big|_{x=\varphi_i(t)}&=0, \ \text{ for }\ i=1,2, \ t\in(0,T), \\ u(x,0)&=u_0(x), \ x\in[\varphi_1(0),\varphi_2(0)], \end{aligned}\right. \end{equation} in a non rectangular domain $$\widetilde{R}=\left\{(x,t)\in\mathbb{R}^2: \varphi_1(t)<x<\varphi_2(t),\ 0<t<T\right\},$$ where $\varphi_1,\varphi_2\in\mathrm{C}^1(0,T)$ (see \cite{HRC} also) are deterministic functions. We assume that $\varphi_1(t)<\varphi_2(t)$, for all $t\in(0,T)$. In order to establish the existence and uniqueness of the strong solution to \eqref{383}, one has to impose the assumption \begin{align}|\varphi'(t)|\leq C, \ \text{ for all }\ t\in[0,T],\end{align} where $C$ is a positive constant and $\varphi(t)=\varphi_1(t)-\varphi_2(t)$, for all $t\in[0,T]$. Using the change of variables \begin{align}(x,t)\mapsto(y,t)=\left(\frac{x-\varphi_1(t)}{\varphi_2(t)-\varphi_1(t)},t\right),\end{align} the domain $\widetilde{R}$ can be transformed into the rectangle $R=(0,1)\times(0,T)$. 
As in Theorems \ref{exis}, one can obtain the existence, uniqueness of the global strong solution of the following stochastic semilinear parabolic problem: \begin{equation}\label{386} \left\{\begin{aligned} du(t)&=[\nu(t)\partial_{xx}u(t)-\alpha(t) u(t)\partial_xu(t)+\eta(x,t)\partial_xu(t)+\beta u(t)(1-u(t))(u(t)-\gamma)]dt\\&\quad+\sigma(t,u(t))dW(t), \ \text{ for } \ t\in(0,T), \\ u(t,0)&=u(t,1)=0, \ \text{ for }t\in(0,T), \\ u(0,x)&=u_0(x), \ x\in\mathcal{O}, \end{aligned}\right. \end{equation} for $t\in(0,T)$, where the functions $\nu$ and $\alpha$ depend only on $t$, and the function $\eta$ depends on $x$ and $t$. We assume that there exists positive constants $\{\nu_i\}_{i=1}^2$, $\{\alpha_i\}_{i=1}^2$ and $\eta_1$ such that \begin{equation}\label{387} \left\{ \begin{aligned} &\nu_1\leq\nu(t)\leq\nu_2, \ \text{ for all }\ t\in(0,T),\\ &\alpha_1\leq \alpha(t)\leq \alpha_2, \ \text{ for all }\ t\in(0,T), \\ &|\partial_x\eta(x,t)|\leq \eta_1 \ \text{ or } \ |\eta(x,t)|\leq \eta_1, \ \text{ for all }\ (x,t)\in\mathcal{O}\times(0,T). \end{aligned} \right. \end{equation} Substituting $u(x,t)=v(y,t)$, $W(t,x)=W(t,y)$ and $\sigma(t,u(x,t))={\sigma}(t,u(y,t))$ in \eqref{383}, we find \begin{equation}\label{388} \left\{\begin{aligned} dv(t)&=\left[\frac{\nu}{\varphi^2(t)}\partial_{yy}v(t)-\frac{\alpha}{\varphi(t)} v(t)\partial_yv(t)+\eta(t,y)\partial_yv(t)\right]dt\\&\quad+\beta v(y,t)(1-v(y,t))(v(y,t)-\gamma)dt +\sigma(t,v(t))dW(t), \\ v(0,t)&=v(1,t), \ t\in(0,T),\\ v(0,y)&=v_0(y)=u_0(\varphi_1(0)+\varphi(0)y), \ y\in[0,1], \end{aligned}\right. \end{equation} where $$\eta(t,y)=-\frac{y\varphi'(t)+\varphi_1'(t)}{\varphi(t)}.$$ Taking $\alpha(t)=\frac{\alpha}{\varphi(t)}$ and $\nu(t)=\frac{\nu}{\varphi^2(t)}$, the the problem \eqref{388} can be reduced to \eqref{386} (where $u(x,t)$ needs to be replaced by $v(y,t)$). Note that the change of variables preserves the spaces and the hypotheses \eqref{387} are satisfied. Thus, the global solvability results of the system \eqref{386} easily implies the global solvability of the system \eqref{388}. \fi \section{The inviscid limit}\label{sec5}\setcounter{equation}{0} In this section, we discuss the inviscid limit of the equation \eqref{abstract} as $\beta\to 0$. Let $u(\cdot)$ be the unique strong solution of the system \eqref{abstract}. We consider the following stochastic Brugers' equation: \begin{equation}\label{48} \left\{ \begin{aligned} dv(t)&=[- Av(t)-\alpha B(v(t))]dt+\sigma(t,v(t))dW(t), \ t\in(0,T),\\ v(0)&=u_0\in\mathrm{L}^{2p}(\Omega;\mathrm{L}^{2}(\mathcal{O})). \end{aligned} \right. \end{equation} The existence and uniqueness of strong solution of the above system can be established in a similar way as in section \ref{sec3} (see \cite{GDP} also). For $u_0\in\mathrm{L}^{2p}(\Omega;\mathrm{L}^2(\mathcal{O}))$, $p>2$, the unique strong solution of the system \eqref{48} satisfies the energy inequality: \begin{align}\label{4.9} &\mathbb{E}\left[\sup_{t\in[0,T]}\|v(t)\|_{\mathrm{L}^2}^2+4\nu\int_0^{T}\|v(t)\|_{\mathrm{H}_0^1}^2d t\right]\leq (2\|u_0\|_{\mathrm{L}^2}^2+14KT)e^{28KT}. \end{align} Also $u(\cdot)$ has the regularity $$u\in\mathrm{L}^{2p}(\Omega;\mathrm{L}^{\infty}(0,T;\mathrm{L}^2(\mathcal{O})))\cap\mathrm{L}^{2}(\Omega;\mathrm{L}^2(0,T;\mathrm{H}_0^1(\mathcal{O}))),$$ with a continuous modification having $u\in\mathrm{C}([0,T];\mathrm{L}^2(\mathcal{O}))\cap\mathrm{L}^2(0,T;\mathrm{H}_0^1(\mathcal{O})),$ $\mathbb{P}$-a.s. 
\begin{proposition}\label{prop5.1} Let $u(\cdot)$ be the unique strong solution of the stochastic Brugers-Huxley equation \eqref{abstract}. As $\beta\to 0$, the strong solution $v(\cdot)$ of the system \eqref{abstract} tends to the strong solution of the stochastic Brugers equation \eqref{48}. \end{proposition} \begin{proof} Let us define $w=u-v$, then $w$ satisfies: \begin{equation}\label{49} \left\{\begin{aligned} dw(t)&=[- Aw(t)-\alpha (B(u(t))-B(v(t)))+\beta c(u(t))]dt\\&\quad +(\sigma(t,u(t))-\sigma(t,v(t)))dW(t), \ t\in(0,T),\\ w(0)&=0. \end{aligned}\right. \end{equation} Applying infinite dimensional It\^o's formula to the process $e^{-\rho(\cdot)}\|w(\cdot)\|_{\mathrm{L}^2}$, we find \begin{align}\label{51} &e^{-\rho(t)}\|w(t)\|_{\mathrm{L}^2}^2+2\nu\int_0^te^{-\rho(s)}\|\partial_x w(s)\|_{\mathrm{L}^2}^2ds\nonumber\\&=\|w(0)\|_{\mathrm{L}^2}^2-\int_0^{t}\rho'(s)e^{-\rho(s)}\|w(s)\|_{\mathrm{L}^2}^2d s-2\alpha\int_0^te^{-\rho(s)}(B(u(s))-B(v(s)),w(s))ds \nonumber\\&\quad+2\beta\int_0^te^{-\rho(s)}(c(u(s)),w(s))ds+\int_0^{t}e^{-\rho(s)}\|\widetilde\sigma(s)\|_{\mathcal{L}_{Q}}^2d s+2\int_0^{t}e^{-\rho(s)}\left(\widetilde{\sigma}(s)dW(s),w(s)\right), \end{align} where $\widetilde{\sigma}(\cdot)=\sigma(\cdot,u(\cdot))-\sigma(\cdot,v(\cdot))$ and $\rho(\cdot)$ is defined in \eqref{3pp51}. A calculation similar to \eqref{3p51} gives \begin{align}\label{4p5} 2\alpha|(B(u)-B(v),w)|\leq{\nu}\|w\|_{\mathrm{H}_0^1}^2+\frac{C\alpha^2}{\nu}\|u\|_{\mathrm{H}_0^1}^2\|w\|_{\mathrm{L}^2}^2. \end{align} Applying H\"older's and Young's inequalities, we estimate $2\beta(c(u),w)$ as \begin{align}\label{4p6} 2\beta|(c(u),w)|&\leq 2\beta|(1+\gamma)(u^2,w)-\gamma(u,w)-(u^3,w)|\nonumber\\&\leq 2\beta(1+\gamma)\|u\|_{\mathrm{L}^4}^2\|w\|_{\mathrm{L}^2}+\beta\gamma\|u\|_{\mathrm{L}^2}\|w\|_{\mathrm{L}^2}+\beta\|w\|_{\mathrm{L}^{\infty}}\|u\|_{\mathrm{L}^3}^3 \nonumber\\&\leq\frac{\beta}{2}\|w\|_{\mathrm{L}^2}^2+2\beta(1+\gamma)^2\|u\|_{\mathrm{L}^4}^4+\frac{\beta}{2}\|w\|_{\mathrm{L}^2}^2+\frac{\beta\gamma^2}{2}\|u\|_{\mathrm{L}^2}^2+\frac{\nu}{2}\|w\|_{\mathrm{H}_0^1}^2+\frac{C\beta^2}{2\nu}\|u\|_{\mathrm{L}^3}^6. 
\end{align} Combining \eqref{4p5} and \eqref{4p6}, substituting it in \eqref{51} and then taking expectation, the fact that the final term in the right hand side of the equality \eqref{51} is a martingale, we obtain \begin{align}\label{4p7} &\mathbb{E}\left[e^{-\rho(t)}\|w(t)\|_{\mathrm{L}^2}^2+\nu\int_0^te^{-\rho(s)}\|\partial_x w(s)\|_{\mathrm{L}^2}^2ds\right]\nonumber\\&\leq -\mathbb{E}\left[\int_0^{t}\rho'(s)e^{-\rho(s)}\|w(s)\|_{\mathrm{L}^2}^2d s\right]+\frac{C\alpha^2}{2\nu}\mathbb{E}\left[\int_0^te^{-\rho(s)}\|u(s)\|_{\mathrm{H}_0^1}^2\|w(s)\|_{\mathrm{L}^2}^2ds\right]\nonumber\\&\quad+\beta\mathbb{E}\left[\int_0^te^{-\rho(s)}\|w(s)\|_{\mathrm{L}^2}^2ds\right]+2\beta(1+\gamma)^2\mathbb{E}\left[\int_0^te^{-\rho(s)}\|u(s)\|_{\mathrm{L}^4}^4ds\right]\nonumber\\&\quad+\frac{\beta\gamma^2}{2}\mathbb{E}\left[\int_0^te^{-\rho(s)}\|u(s)\|_{\mathrm{L}^2}^2ds\right]+\frac{C\beta^2}{2\nu}\mathbb{E}\left[\int_0^te^{-\rho(s)}\|u(s)\|_{\mathrm{L}^2}^2\|u(s)\|_{\mathrm{L}^4}^4ds\right]\nonumber\\&\leq \beta\mathbb{E}\left[\int_0^te^{-\rho(s)}\|w(s)\|_{\mathrm{L}^2}^2ds\right]+2\beta(1+\gamma)^2\mathbb{E}\left[\int_0^T\|u(t)\|_{\mathrm{L}^4}^4dt\right]\nonumber\\&\quad+\frac{\beta\gamma^2}{2}\mathbb{E}\left[\int_0^T\|u(t)\|_{\mathrm{L}^2}^2dt\right]+\frac{C\beta^2}{2\nu}\left\{\mathbb{E}\left[\sup_{t\in[0,T]}\|u(t)\|_{\mathrm{L}^2}^4\right]\right\}^{1/2}\left\{\mathbb{E}\left[\int_0^T\|u(t)\|_{\mathrm{L}^4}^4dt\right]^2\right\}^{1/2}, \end{align} where we used H\"older's inequality. Note that the final term from the right hand side of the inequality is bounded by using \eqref{energy2} and \eqref{3p27}. An application of Gronwall's inequality in \eqref{4p7} yields \begin{align}\label{4p8} \mathbb{E}\left[e^{-\rho(t)}\|w(t)\|_{\mathrm{L}^2}^2\right]&\leq\beta \bigg\{2(1+\gamma)^2\mathbb{E}\left[\int_0^T\|u(t)\|_{\mathrm{L}^4}^4dt\right]+\frac{\gamma^2}{2}\mathbb{E}\left[\int_0^T\|u(t)\|_{\mathrm{L}^2}^2dt\right]\nonumber\\&\quad+\frac{C\beta}{2\nu}\left\{\mathbb{E}\left[\sup_{t\in[0,T]}\|u(t)\|_{\mathrm{L}^2}^4\right]\right\}^{1/2}\left\{\mathbb{E}\left[\int_0^T\|u(t)\|_{\mathrm{L}^4}^4dt\right]^2\right\}^{1/2}\bigg\}e^{\beta t}, \end{align} for all $t\in[0,T]$. Passing $\beta\to 0$ in \eqref{4p8}, we find $u(t)\to v(t)$, for all $t\in[0,T]$, $\mathbb{P}$-a.s. \end{proof} \iffalse \begin{remark} For $\delta=1$ and $\beta=0$, one can obtain the global attractor for viscous Burgers' equation in $\mathrm{L}^2(\mathcal{O})$ (using Proposition \ref{prop4.1} and Remark \ref{rem1.6}, and the compact embedding of $\mathrm{H}_0^1(\mathcal{O})\subset\mathrm{L}^2(\mathcal{O})$, see \cite{Te2,MBC}). A computer assisted proof of the existence of globally attracting fixed points of viscous Burgers equation with constant forcing is obtained in \cite{JCy} (see \cite{JCy1} for nonautonomous forcing). The authors in \cite{NS} studied numerically the long-time dynamics the viscous forced Burgers equation. For $\delta=1$, if we denote the global attractors for the generalized Burgers-Huxley equation as $\mathcal{A}_{\beta}$ and if $\mathcal{A}$ denote the global attarctor for Burgers' equation, then one can show that $$\lim_{\beta\to 0}\mathrm{dist}_{\mathrm{L}^2(\mathcal{O})}(\mathcal{A}_{\beta},\mathcal{A})=0.$$ \end{remark} \begin{remark}\label{rem5.3} The \emph{generalized Burgers-Fisher equation} is a nonlinear parabolic mathematical model describing various phenomena such as gas dynamics, heat conduction, nonlinear optics, chemical physics, etc. 
The generalized Burgers-Fisher equation is given by (\cite{AMW,OPY}) \begin{align}\label{a1} \frac{\partial u}{\partial t}+\alpha u^{\delta}\frac{\partial u}{\partial x}-\nu\frac{\partial^2u}{\partial x^2}=\beta u(1-u^{\delta}). \end{align} For $\delta=1$, one can obtain the following \emph{Burgers-Fisher equation}: \begin{align}\label{a2} \frac{\partial u}{\partial t}+\alpha u\frac{\partial u}{\partial x}-\nu\frac{\partial^2u}{\partial x^2}=\beta u(1-u), \end{align} which also shows a prototypical model for describing the interaction between the reaction mechanism, convection effect, and diffusion transport. We consider the Generalized Burgers-Fisher equation defined on $\mathcal{O}\times(0,T)=(0,1)\times(0,T)$: \begin{equation}\label{a3} \left\{\begin{aligned} \frac{\partial u(x,t)}{\partial t}+\alpha u(x,t)^{\delta}\frac{\partial u(x,t)}{\partial x}-\nu\frac{\partial^2u(x,t)}{\partial x^2}&=\beta u(x,t)(1-u(x,t)^{\delta})+f(x,t),\\ u(0,t)&=u(1,t)=0, \ t\in(0,T),\\ u(x,0)&=u_0(x), \ x\in\mathcal{O}, \end{aligned}\right. \end{equation} As discussed in Remark \ref{rem3.5}, one can obtain the mild solution $u\in\mathrm{C}([0,T];\mathrm{L}^{\delta+1}(\mathcal{O}))$ of the system system \eqref{a3} given by \begin{align} u(t)=R(t)u_0+\alpha\int_0^tR(t-s)B(u(s))ds+\beta\int_0^tR(t-s)d(u(s))ds, \end{align} where $d(u(s))=u(1-u^{\delta})$, whenever $u_0\in\mathrm{L}^{\delta+1}(\mathcal{O})$ and $f\in\mathrm{L}^2(0,T;\mathrm{L}^{1}(\mathcal{O}))$. \end{remark} \fi Let us now discuss the inviscid limit of the equation \eqref{abstract} as $\alpha\to 0$. We consider the following Huxley equation, for $(x,t)\in\Omega\times(0,T)$: \begin{equation}\label{514} \left\{ \begin{aligned} dz(t)&=[- \nu Az(t)+\beta c(z(t))]dt+\sigma(t,z(t))dW(t), \ t\in(0,T),\\ z(0)&=u_0\in\mathrm{L}^{2p}(\Omega;\mathrm{L}^{2}(\mathcal{O})). \end{aligned}\right. \end{equation} From \eqref{3.7}, for $F(\cdot)=\nu A+\beta c(\cdot)$, it can be easily seen that \begin{align} &\langle F(u)-F(v),u-v\rangle +\beta(1+\gamma+\gamma^2)\|u-v\|_{\mathrm{L}^2}^2\geq 0, \end{align} and hence the operator $F+\lambda I$ is monotone, where $\lambda=\beta(1+\gamma+\gamma^2)$. Since the operator $F+\lambda I:\mathrm{H}_0^1(\mathcal{O})\to\mathrm{H}^{-1}(\mathcal{O})$ is monotone and hemicontinuous, using Theorem Theorem 1.3, Chapter 2,\cite{VB}, the operator $F+\lambda I$ is a maximal monotone operator., Moreover, Corollary 1.2, Chapter 2,\cite{VB} gives us that $R(F+\lambda I)=\mathrm{H}^{-1}(\mathcal{O})$. The existence and uniqueness of strong solution $$z\in\mathrm{L}^{2p}(\Omega;\mathrm{L}^{\infty}(0,T;\mathrm{L}^2(\Omega)))\cap\mathrm{L}^{2}(\Omega;\mathrm{L}^2(0,T;\mathrm{H}_0^1(\Omega)))\cap\mathrm{L}^4(\Omega;\mathrm{L}^4(0,T;\mathrm{L}^4(\mathcal{O})))$$ to the system \eqref{514} can be proved in a similar way as in Theorem \ref{exis}. Moreover, $z(\cdot)$ satisfies: \begin{align}\label{5.15} &\mathbb{E}\left[\sup_{t\in[0,T]}\|z(t)\|_{\mathrm{L}^2}^2+4\nu\int_0^{T}\|z(t)\|_{\mathrm{H}_0^1}^2d t+2\beta\int_0^{T}\|z(t)\|_{\mathrm{L}^4}^4dt\right]\nonumber\\&\quad \leq (2\|u_0\|_{\mathrm{L}^2}^2+14KT)e^{4(\beta(1+\gamma^2)+7K)T}. \end{align} Then, we have the following result: \begin{proposition}\label{prop4.2} Let $u(\cdot)$ be the unique strong solution of the stochastic Brugers-Huxley equation \eqref{abstract}. As $\alpha\to 0$, the strong solution $v(\cdot)$ of the system \eqref{abstract} tends to the strong solution of the stochastic Huxley equation \eqref{514}. \end{proposition} \begin{proof} Let us define $w=u-z$. 
Then $w$ satisfies: \begin{equation}\label{515} \left\{\begin{aligned} dw(t)&=[- Aw(t)+\beta (c(u(t))-c(z(t)))-\alpha B(u(t))]dt\\&\quad +[\sigma(t,u(t))-\sigma(t,z(t))]dW(t), \ t\in(0,T),\\ z(0)&=0. \end{aligned}\right. \end{equation} Applying infinite dimensional It\^o's formula to the process $\|w(\cdot)\|_{\mathrm{L}^2}^2$, we find \begin{align}\label{516} &e^{-\widehat{\rho}(t)}\|w(t)\|_{\mathrm{L}^2}^2+2\nu\int_0^te^{-\widehat{\rho}(s)}\|\partial_x w(s)\|_{\mathrm{L}^2}^2ds\nonumber\\&=\|w(0)\|_{\mathrm{L}^2}^2-\int_0^{t}\widehat{\rho}'(s)e^{-\widehat{\rho}(s)}\|w(s)\|_{\mathrm{L}^2}^2d s+2\beta\int_0^te^{-\widehat{\rho}(s)}(c(u(s))-c(z(s)),w(s))ds\nonumber\\&\quad-2\alpha\int_0^te^{-\widehat{\rho}(s)}(B(u(s)),w(s))ds +\int_0^{t}e^{-\widehat{\rho}(s)}\|\widehat\sigma(s)\|_{\mathcal{L}_{Q}}^2d s+2\int_0^{t}e^{-\widehat{\rho}(s)}\left(\widehat{\sigma}(s)dW(s),w(s)\right), \end{align} where $\widehat{\sigma}(\cdot)=\sigma(\cdot,u(\cdot))-\sigma(\cdot,z(\cdot))$ and \begin{align*} \widehat{\rho}(t)=C\alpha \int_0^t\|u(s)\|_{\mathrm{H}_0^1}^2ds, \ \text{ so that }\ \widehat{\rho}'(t)=C\alpha\|u(t)\|_{\mathrm{H}_0^1}^2, \ \text{ a.e.} \end{align*} The above equality implies \begin{align}\label{517} &e^{-\widehat{\rho}(t)}\|w(t)\|_{\mathrm{L}^2}^2+2\nu\int_0^te^{-\widehat{\rho}(s)}\| w(s)\|_{\mathrm{H}_0^1}^2ds\nonumber\\&=\|w(0)\|_{\mathrm{L}^2}^2-\int_0^{t}\widehat{\rho}'(s)e^{-\widehat{\rho}(s)}\|w(s)\|_{\mathrm{L}^2}^2d s-2\alpha\int_0^te^{-\widehat{\rho}(s)}(u(s)\partial_xw(s),w(s))ds \nonumber\\&\quad+2\beta(1+\gamma+\gamma^2)\int_0^te^{-\widehat{\rho}(s)}\|w(s)\|_{\mathrm{L}^2}^2ds +\int_0^{t}e^{-\widehat{\rho}(s)}\|\widehat\sigma(s)\|_{\mathcal{L}_{Q}}^2d s\nonumber\\&\quad+2\int_0^{t}e^{-\widehat{\rho}(s)}\left(\widehat{\sigma}(s)dW(s),w(s)\right). \end{align} We estimate the term $-2\alpha(u\partial_xw,w)$ using H\"older's and Young's inequalities as \begin{align}\label{518} -2\alpha(u\partial_xw,w)&\leq 2\alpha\|u\|_{\mathrm{L}^{\infty}}\|\partial_xu\|_{\mathrm{L}^2}\|w\|_{\mathrm{L}^2}\leq {C\alpha}\|u\|_{\mathrm{H}_0^1}^2(1+\|w\|_{\mathrm{L}^2}^2). \end{align} Using \eqref{518} in \eqref{517} and then taking expectation, we obtain \begin{align}\label{522} &\mathbb{E}\bigg[e^{-\widehat{\rho}(t)}\|w(t)\|_{\mathrm{L}^2}^2+2\nu\int_0^te^{-\widehat{\rho}(s)}\| w(s)\|_{\mathrm{H}_0^1}^2ds\bigg]\nonumber\\&\leq C\alpha\mathbb{E}\left[\int_0^{T}\|u(t)\|_{\mathrm{H}_0^1}^2dt\right]+ 2\beta(1+\gamma+\gamma^2)\mathbb{E}\left[\int_0^{t}e^{-\widehat{\rho}(s)}\|w(s)\|_{\mathrm{L}^2}^2ds\right]\nonumber\\&\quad+\mathbb{E}\left[\int_0^{t}e^{-\widehat{\rho}(s)}\|\widehat\sigma(s)\|_{\mathcal{L}_{Q}}^2d s\right]\nonumber\\&\leq C\alpha\mathbb{E}\left[\int_0^{T}\|u(t)\|_{\mathrm{H}_0^1}^2dt\right]+[2\beta(1+\gamma+\gamma^2)+L]\mathbb{E}\left[\int_0^{t}e^{-\widehat{\rho}(s)}\|w(s)\|_{\mathrm{L}^2}^2ds\right], \end{align} where we used Hypothesis \ref{hyp} (H.3). An application of Gronwall's inequality in \eqref{522} gives \begin{align}\label{524} &\mathbb{E}\left[e^{-\widehat{\rho}(t)}\|w(t)\|_{\mathrm{L}^2}^2\right]\leq C\alpha\mathbb{E}\left[\int_0^{T}\|u(t)\|_{\mathrm{H}_0^1}^2dt\right]e^{[2\beta(1+\gamma+\gamma^2)+L]T}. \end{align} Passing $\alpha\to 0$ in \eqref{524}, one can easily obtain that $u(t)\to z(t)$, for all $t\in[0,T]$, $\mathbb{P}$-a.s. \end{proof} \section{Large Deviations Principle and Exit Time Estimates}\label{sec6}\setcounter{equation}{0} In this section, we examine the small noise asymptotic granted by large deviations theory and use it to estimate the exit time estimates. 
We take the initial data $u_0\in\mathrm{L}^2(\mathcal{O})$ as deterministic. Let us first provide some basics definitions of large deviations theory. \subsection{Preliminaries} Let us denote by $\mathscr{E}$, a complete separable metric space (Polish space) with the Borel $\sigma$-field $\mathscr{B}(\mathscr{E})$. \begin{definition} A function $\mathrm{I} : \mathscr{E}\rightarrow [0, \infty]$ is called a \emph{rate function} if $\mathrm{I}$ is lower semicontinuous. A rate function $\mathrm{I}$ is called a \emph{good rate function} if for arbitrary $M \in [0, \infty)$, the level set $K_M = \big\{x\in\mathscr{E}: \mathrm{I}(x)\leq M\big\}$ is compact in $\mathscr{E}$. \end{definition} \begin{definition}[Large deviation principle]\label{LDP}\label{def4.2} Let $\mathrm{I}$ be a rate function on $\mathscr{E}$. A family $\big\{\mathrm{X}^{\varepsilon}: \varepsilon > 0\big\}$ of $\mathscr{E}$-valued random elements is said to satisfy \emph{the large deviation principle} on $\mathscr{E}$ with rate function $\mathrm{I}$, if the following two conditions hold: \begin{enumerate} \item[(i)] (Large deviation upper bound) For each closed set $\mathrm{F}\subset \mathscr{E}$: $$ \limsup_{\varepsilon\rightarrow 0} \varepsilon\log \mathbb{P}\left(\mathrm{X}^{\varepsilon}\in\mathrm{F}\right) \leq -\inf_{x\in \mathrm{F}} \mathrm{I}(x),$$ \item[(ii)] (Large deviation lower bound) for each open set $\mathrm{G}\subset \mathscr{E}$: $$ \liminf_{\varepsilon\rightarrow 0}\varepsilon \log \mathbb{P}(\mathrm{X}^{\varepsilon}\in\mathrm{G}) \geq -\inf_{x\in \mathrm{G}} \mathrm{I}(x).$$ \end{enumerate} \end{definition} \begin{definition} Let $\mathrm{I}$ be a rate function on $\mathscr{E}$. A family $\big\{\mathrm{X}^{\varepsilon} :\varepsilon > 0\big\}$ of $\mathscr{E}$-valued random elements is said to satisfy the \emph{Laplace principle} on $\mathscr{E}$ with rate function $\mathrm{I}$ if for each real-valued, bounded and continuous function $h$ defined on $\mathscr{E}$, i.e., for $h\in\mathrm{C}_b(\mathscr{E})$, \begin{equation}\label{LP} \lim_{\varepsilon \rightarrow 0} {\varepsilon }\log \mathbb{E}\left\{\exp\left[- \frac{1}{\varepsilon}h(\mathrm{X}^{\varepsilon})\right]\right\} = -\inf_{x \in \mathscr{E}} \big\{h(x) + \mathrm{I}(x)\big\}. \end{equation} \end{definition} \begin{lemma}[Varadhan's Lemma \cite{Va}]\label{VL} Let $\mathscr{E}$ be a Polish space and $\{\mathrm{X}^{\varepsilon}: \varepsilon > 0\}$ be a family of $\mathscr{E}$-valued random elements satisfying LDP with rate function $\mathrm{I}$. Then $\{\mathrm{X}^{\varepsilon}: \varepsilon > 0\}$ satisfies the Laplace principle on $\mathscr{E}$ with the same rate function $\mathrm{I}$. \end{lemma} \begin{lemma}[Bryc's Lemma \cite{DZ}]\label{BL} The Laplace principle implies the LDP with the same rate function. \end{lemma} Note that, Varadhan's Lemma together with Bryc's converse of Varadhan's Lemma state that for Polish space valued random elements, the Laplace principle and the large deviation principle are equivalent. The LDP for the 2D stochastic Navier-Stokes equations is established in \cite{SSSP}. Next Theorem shows that the LDP is preserved under continuous mappings, and is known as \emph{contraction principle}. \begin{theorem}[Contraction principle, Theorem 4.2.1, \cite{DZ}]\label{thm4.6} Let $\mathscr{E}$ and $\mathscr{G}$ be Hausdorff topological spaces and $f : \mathscr{E}\to \mathscr{G}$ a continuous function. Let us consider a good rate function $I : \mathscr{E}\to [0,\infty]$. 
\begin{enumerate} \item [(a)] For each $y \in\mathscr{E}'$, define \begin{align} \mathrm{J}(y):=\inf\left\{\mathrm{I}(x):x\in\mathscr{E},y=f(x)\right\}. \end{align} \item [(b)] If $\mathrm{I}$ controls the LDP associated with a family of probability measures $\{\mu_{\varepsilon}\}$ on $\mathscr{E}$, then $\mathrm{I}$ controls the LDP associated with the family of probability measures $\{\mu_{\varepsilon}\circ f^{-1}\}$ on $\mathscr{G}$. \end{enumerate} \end{theorem} Let us consider the following stochastic heat equation (see \cite{DaZ}): \begin{equation}\label{7p1} \left\{ \begin{aligned} dz(t)+ \nu Az(t)dt&=dW(t), \ t\in(0,T),\\ z(0)&=0. \end{aligned} \right. \end{equation} For $\mathop{\mathrm{Tr}}(Q)<\infty$, one can show that there exists a unique pathwise strong solution to the system \eqref{7p1} with trajectories in $\mathrm{C}([0,T];\mathrm{L}^2(\mathcal{O}))\cap\mathrm{L}^2(0,T;\mathrm{H}_0^1(\mathcal{O}))$, $\mathbb{P}$-a.s. and satisfies the following energy estimate: \begin{align} \mathbb{E}\left[\sup_{t\in[0,T]}\|z(t)\|_{\mathrm{L}^2}^2+\nu\int_0^T\|z(s)\|_{\mathrm{H}_0^1}^2ds\right]\leq CT\mathop{\mathrm{Tr}}(Q). \end{align} Let us define $v=u-z$ and consider the system satisfied by $v$ as \begin{equation}\label{7.5} \left\{ \begin{aligned} dv(t)+\nu Av(t)dt&=-\alpha B(v(t)+z(t))dt+\beta c(v(t)+z(t))dt, \ t\in(0,T),\\ v(0)&=u_0\in\mathrm{L}^{2}(\mathcal{O}). \end{aligned} \right. \end{equation} Note that the randomness in the system \eqref{7.5} comes through $z(\cdot,\omega)$ and the system can be solved for each $\omega\in\Omega$ and one can show that the trajectories belong to $\mathrm{C}([0, T ]; \mathrm{L}^2(\mathcal{O}))\cap\mathrm{L}^2(0, T ; \mathrm{H}^1_0(\mathcal{O}))$, $\mathbb{P}$-a.s. Taking inner product with $v(\cdot)$ with the first equation in \eqref{7.5}, we find \begin{align}\label{7.6} &\frac{1}{2}\frac{d}{dt}\|v(t)\|_{\mathrm{L}^2}^2+\nu\|\partial_xv(t)\|_{\mathrm{L}^2}^2=-\alpha(B(v(t)+z(t)),v(t))+\beta(c(v(t)+z(t)),v(t)). \end{align} Using an integration by parts, \eqref{6}, H\"older's and Young's inequalities, it can be easily deduced that \begin{align}\label{7.7} -\alpha(B(v+z),v)&=-\alpha(v\partial_xv,v)-\alpha(z\partial_xv,v)-\alpha(v\partial_xz,v)-\alpha(z\partial_xz,v)\nonumber\\&=2\alpha(z,v\partial_xv)-\alpha(z\partial_xz,v)\nonumber\\&\leq 2\alpha\|z\|_{\mathrm{L}^{\infty}}\|v\|_{\mathrm{L}^2}\|\partial_xv\|_{\mathrm{L}^2}+\alpha\|z\|_{\mathrm{L}^{\infty}}\|\partial_xz\|_{\mathrm{L}^2}\|v\|_{\mathrm{L}^2}\nonumber\\&\leq\frac{\nu}{2}\|\partial_xv\|_{\mathrm{L}^2}^2+\frac{2C\alpha}{\nu}\|z\|_{\mathrm{H}_0^1}^2\|v\|_{\mathrm{L}^2}^2+C\alpha(1+\|v\|_{\mathrm{L}^2}^2)\|z\|_{\mathrm{H}_0^1}^2. \end{align} Similarly, we estimate $\beta(c(v+z),v)$ as \begin{align}\label{7.8} \beta(c(v+z),v)&=-\beta\gamma\|v\|_{\mathrm{L}^2}^2-\beta\|v\|_{\mathrm{L}^4}^4+\beta(1+\gamma)(v^2,v)+2\beta(1+\gamma)(vz,v)\nonumber\\&\quad+\beta(1+\gamma)(z^2,v)-\beta\gamma(z,v)-3\beta(v^2z,v)-3\beta(vz^2,v)-\beta(z^3,v). 
\end{align} Let us compute each term appearing the right hand side of the equality \eqref{7.8} as \begin{align} \beta(1+\gamma)|(v^2,v)|&\leq\beta(1+\gamma)\|v\|_{\mathrm{L}^2}\|v\|_{\mathrm{L}^4}^2\leq \frac{\beta}{4}\|v\|_{\mathrm{L}^4}^4+\beta(1+\gamma)^2\|v\|_{\mathrm{L}^2}^2,\\ 2\beta(1+\gamma)|(vz,v)|&\leq 2\beta(1+\gamma)\|z\|_{\mathrm{L}^{\infty}}\|v\|_{\mathrm{L}^2}^2\leq C\beta(1+\gamma)(1+\|z\|_{\mathrm{H}_0^1}^2)\|v\|_{\mathrm{L}^2}^2,\\ \beta(1+\gamma)|(z^2,v)|&\leq\beta(1+\gamma)\|z\|_{\mathrm{L}^2}\|z\|_{\mathrm{L}^{\infty}}\|v\|_{\mathrm{L}^2}\leq C\beta(1+\gamma)\|z\|_{\mathrm{H}_0^1}^2(1+\|v\|_{\mathrm{L}^2}^2),\\ \beta\gamma|(z,v)|&\leq\beta\gamma\|z\|_{\mathrm{L}^2}\|v\|_{\mathrm{L}^2}\leq\frac{\beta\gamma}{2}\|z\|_{\mathrm{L}^2}^2+\frac{\beta\gamma}{2}\|v\|_{\mathrm{L}^2}^2,\\ 3\beta|(v^2z,v)|&\leq\|z\|_{\mathrm{L}^{\infty}}\|v\|_{\mathrm{L}^4}^2\|v\|_{\mathrm{L}^2}\leq\frac{\beta}{4}\|v\|_{\mathrm{L}^4}^4+9C\beta\|z\|_{\mathrm{H}_0^1}^2\|v\|_{\mathrm{L}^2}^2,\\ 3\beta|(vz^2,v)|&\leq 3\beta\|z\|_{\mathrm{L}^{\infty}}^2\|v\|_{\mathrm{L}^2}^2\leq 3C\beta\|z\|_{\mathrm{H}_0^1}^2\|v\|_{\mathrm{L}^2}^2,\\ \beta|(z^3,v)|&\leq\beta\|z\|_{\mathrm{L}^6}^3\|v\|_{\mathrm{L}^2}\leq C\beta\|z\|_{\mathrm{H}_0^1}\|z\|_{\mathrm{L}^2}^2\|v\|_{\mathrm{L}^2}\leq\beta\|z\|_{\mathrm{H}_0^1}^2+\frac{C\beta}{4}\|z\|_{\mathrm{L}^2}^4\|v\|_{\mathrm{L}^2}^2. \label{7.15} \end{align} Combining \eqref{7.7}-\eqref{7.15}, substituting it in \eqref{7.6} and then integrating from $0$ to $t$, we get \begin{align}\label{7.16} &\|v(t)\|_{\mathrm{L}^2}^2+\nu\int_0^t\|\partial_xv(s)\|_{\mathrm{L}^2}^2ds+\beta\int_0^t\|v(s)\|_{\mathrm{L}^4}^4ds\nonumber\\&\leq\|v_0\|_{\mathrm{L}^2}^2+C(\alpha+\beta)\int_0^t\|z(s)\|_{\mathrm{H}_0^1}^2ds+\frac{\beta\gamma}{2}\int_0^t\|z(s)\|_{\mathrm{L}^2}^2ds\\&\quad +C\int_0^t\bigg\{\left[\alpha\left(\frac{1}{\nu}+1\right)+\beta(1+\gamma)\right]\|z(s)\|_{\mathrm{H}_0^1}^2+\beta\|z(s)\|_{\mathrm{L}^2}^4+\beta(1+\gamma+\gamma^2)\bigg\}\|v(s)\|_{\mathrm{L}^2}^2ds,\nonumber \end{align} for all $0\leq t\leq T$. An application of Gronwall's inequality in \eqref{7.16} yields \begin{align}\label{7.17} &\|v(t)\|_{\mathrm{L}^2}^2+\nu\int_0^t\|\partial_xv(s)\|_{\mathrm{L}^2}^2ds+\beta\int_0^t\|v(s)\|_{\mathrm{L}^4}^4ds\nonumber\\&\leq\left\{\|u_0\|_{\mathrm{L}^2}^2+C\left(\alpha+\beta\right)\int_0^T\|z(t)\|_{\mathrm{H}_0^1}^2dt\right\}\nonumber\\&\quad\times\exp\bigg\{C\left[\alpha\left(\frac{1}{\nu}+1\right)+\beta(1+\gamma)+\beta\sup_{t\in[0,T]}\|z(t)\|_{\mathrm{L}^2}^2\right]\int_0^T\|z(t)\|_{\mathrm{H}_0^1}^2dt+C\left[\beta(1+\gamma+\gamma^2)\right]t\bigg\}, \end{align} for all $t\in[0,T]$. \begin{lemma}\label{lem7.7} Let a function $\psi\in\mathscr{E}:=\mathrm{C}([0,T];\mathrm{L}^2(\mathcal{O}))\cap\mathrm{L}^2(0,T;\mathrm{H}_0^1(\mathcal{O}))$ be given. Let the map \begin{align}\label{718}\Psi:\psi\mapsto v_{\psi}\end{align} be defined by \begin{equation}\label{7.18} \left\{ \begin{aligned} dv_{\psi}(t)+\nu Av_{\psi}(t)dt&=-\alpha B(v_{\psi}(t)+\psi(t))dt+\beta c(v_{\psi}(t)+\psi(t))dt, \ t\in(0,T),\\ v_{\psi}(0)&=u_0\in\mathrm{L}^{2}(\mathcal{O}). \end{aligned} \right. \end{equation} Then the map $\Psi$ is continuous from the space $\mathscr{E}$ to $\mathscr{E}$. \end{lemma} \begin{proof} Let us consider two functions $\psi_1$ and $\psi_2$ in $\mathscr{E}$. We denote the corresponding solutions of \eqref{7.18} as $v_{\psi_i}$, for $i=1,2$. 
Then $v_{\psi}=v_{\psi_1}-v_{\psi_2}$, where $\psi=\psi_1-\psi_2$, satisfies: \begin{equation}\label{7.19} \left\{ \begin{aligned} dv_{\psi}(t)+\nu Av_{\psi}(t)dt&=-\alpha \left[B(v_{\psi_1}(t)+\psi_1(t))-B(v_{\psi_2}(t)+\psi_2(t))\right]dt\\&\quad+\beta \left[c(v_{\psi_1}(t)+\psi_1(t))-c(v_{\psi_2}(t)+\psi_2(t))\right]dt, \ t\in(0,T),\\ v_{\psi}(0)&=0. \end{aligned} \right. \end{equation} Taking inner product with $v_{\psi}$ to the first equation in \eqref{7.19} to find \begin{align}\label{7.20} \frac{1}{2}\frac{d}{dt}\|v_{\psi}(t)\|_{\mathrm{L}^2}^2+\nu\|\partial_xv_{\psi}(t)\|_{\mathrm{L}^2}^2&= -\alpha(B(v_{\psi_1}(t)+\psi_1(t))-B(v_{\psi_2}(t)+\psi_2(t)),v_{\psi})\nonumber\\&\quad+\beta (c(v_{\psi_1}(t)+\psi_1(t))-c(v_{\psi_2}(t)+\psi_2(t)),v_{\psi}). \end{align} We estimate $-\alpha(B(v_{\psi_1}+\psi_1)-B(v_{\psi_2}+\psi_2),v_{\psi})$ as \begin{align}\label{7.21} -\alpha(B(v_{\psi_1}+\psi_1)-B(v_{\psi_2}+\psi_2),v_{\psi})&=-\alpha(v_{\psi}\partial_x(v_{\psi_1}+\psi_1),v_{\psi})-\alpha(\psi\partial_x(v_{\psi_1}+\psi_1),v_{\psi})\nonumber\\&\quad-\alpha((v_{\psi_2}+\psi_2)\partial_xv_{\psi},v_{\psi})-\alpha((v_{\psi_2}+\psi_2)\partial_x\psi,v_{\psi})\nonumber\\&=:\sum_{i=1}^4I_i, \end{align} where $I_i$, for $i=1,\ldots,4$ are the terms appearing in the right hand side of the equality \eqref{7.21}. We estimate $I_i$, for $i=1,\ldots,4$ using H\"older's and Young's inequalities as \begin{align} |I_1|&=|\alpha(v_{\psi_1}+\psi_1,v_{\psi}\partial_xv_{\psi})|\leq\alpha\|v_{\psi_1}+\psi_1\|_{\mathrm{L}^{\infty}}\|v_{\psi}\|_{\mathrm{L}^2}\|\partial_xv_{\psi}\|_{\mathrm{L}^2}\nonumber\\&\leq\frac{\nu}{4}\|\partial_xv_{\psi}\|_{\mathrm{L}^2}^2+\frac{C}{\nu}\left(\|v_{\psi_1}\|_{\mathrm{H}_0^1}^2+\|\psi_1\|_{\mathrm{H}_0^1}^2\right)\|v_{\psi}\|_{\mathrm{L}^2}^2,\\ |I_2|&\leq \alpha\|\psi\|_{\mathrm{L}^{\infty}}\|\partial_x(v_{\psi_1}+\psi_1)\|_{\mathrm{L}^2}\|v_{\psi}\|_{\mathrm{L}^2}\leq C\|\psi\|_{\mathrm{H}_0^1}^2+\frac{\alpha^2}{2}\left(\|v_{\psi_1}\|_{\mathrm{H}_0^1}^2+\|\psi_1\|_{\mathrm{H}_0^1}^2\right)\|v_{\psi}\|_{\mathrm{L}^2}^2,\\ |I_3|&\leq\frac{\nu}{4}\|\partial_xv_{\psi}\|_{\mathrm{L}^2}^2+\frac{C}{\nu}\left(\|v_{\psi_2}\|_{\mathrm{H}_0^1}^2+\|\psi_2\|_{\mathrm{H}_0^1}^2\right)\|v_{\psi}\|_{\mathrm{L}^2}^2,\\ |I_4|&\leq\alpha\|v_{\psi_2}+\psi_2\|_{\mathrm{L}^{\infty}}\|\partial_x\psi\|_{\mathrm{L}^2}\|v_{\psi}\|_{\mathrm{L}^2}\leq\|\psi\|_{\mathrm{H}_0^1}^2+\frac{C\alpha^2}{2}\left(\|v_{\psi_2}\|_{\mathrm{H}_0^1}^2+\|\psi_2\|_{\mathrm{H}_0^1}^2\right)\|v_{\psi}\|_{\mathrm{L}^2}^2. 
\end{align} We estimate $\beta (c(v_{\psi_1}+\psi_1)-c(v_{\psi_2}+\psi_2),v_{\psi})$ as \begin{align}\label{7.26} &\beta (c(v_{\psi_1}+\psi_1)-c(v_{\psi_2}+\psi_2),v_{\psi})\nonumber\\&= \beta(((v_{\psi_1}+\psi_1)-(v_{\psi_2}+\psi_2))[(1+\gamma)(v_{\psi_1}+\psi_1+v_{\psi_2}+\psi_2)\nonumber\\&\qquad-(\gamma+(v_{\psi_1}+\psi_1)^2+(v_{\psi_1}+\psi_1)(v_{\psi_1}+\psi_1)+(v_{\psi_2}+\psi_2)^2)],v_{\psi})\nonumber\\&=\beta(v_{\psi}[(1+\gamma)(v_{\psi_1}+\psi_1+v_{\psi_2}+\psi_2)\nonumber\\&\qquad-(\gamma+(v_{\psi_1}+\psi_1)^2+(v_{\psi_1}+\psi_1)(v_{\psi_1}+\psi_1))+(v_{\psi_2}+\psi_2)^2],v_{\psi})\nonumber\\&\quad+\beta(\psi[(1+\gamma)(v_{\psi_1}+\psi_1+v_{\psi_2}+\psi_2)\nonumber\\&\qquad-(\gamma+(v_{\psi_1}+\psi_1)^2+(v_{\psi_1}+\psi_1)(v_{\psi_1}+\psi_1)+(v_{\psi_2}+\psi_2)^2)],v_{\psi})\nonumber\\&=-\beta\gamma\|v_{\psi}\|_{\mathrm{L}^2}^2+\beta(1+\gamma)(v_{\psi}(v_{\psi_1}+\psi_1+v_{\psi_2}+\psi_2),v_{\psi})\nonumber\\&\quad-\beta(v_{\psi}((v_{\psi_1}+\psi_1)^2+(v_{\psi_1}+\psi_1)(v_{\psi_1}+\psi_1)+(v_{\psi_2}+\psi_2)^2),v_{\psi})\nonumber\\&\quad+\beta(1+\gamma)(\psi(v_{\psi_1}+\psi_1+v_{\psi_2}+\psi_2),v_{\psi})-\beta\gamma(\psi,v_{\psi})\nonumber\\&\quad-\beta(\psi((v_{\psi_1}+\psi_1)^2+(v_{\psi_1}+\psi_1)(v_{\psi_1}+\psi_1)+(v_{\psi_2}+\psi_2)^2),v_{\psi})=:\sum_{i=5}^9I_i, \end{align} where $I_i$, for $i=5,\ldots,8$ are the final five terms appearing in the right hand side of the equality \eqref{7.26}. We estimate $I_i$, for $i=5,\ldots,8$ using H\"older's and Young's inequalities as \begin{align} |I_5|&\leq\beta(1+\gamma)\|v_{\psi_1}+\psi_1+v_{\psi_2}+\psi_2\|_{\mathrm{L}^{\infty}}\|v_{\psi}\|_{\mathrm{L}^2}^2\nonumber\\&\leq C\beta(1+\gamma)\left(\|v_{\psi_1}\|_{\mathrm{H}_0^1}+\|\psi_1\|_{\mathrm{H}_0^1}+\|v_{\psi_2}\|_{\mathrm{H}_0^1}+\|\psi_2\|_{\mathrm{H}_0^1}\right)\|v_{\psi}\|_{\mathrm{L}^2}^2,\\ |I_6|&\leq\beta\|(v_{\psi_1}+\psi_1)^2+(v_{\psi_1}+\psi_1)(v_{\psi_1}+\psi_1)+(v_{\psi_2}+\psi_2)^2\|_{\mathrm{L}^{\infty}}\|v_{\psi}\|_{\mathrm{L}^2}^2\nonumber\\&\leq C\beta\left(\|v_{\psi_1}\|_{\mathrm{H}_0^1}^2+\|\psi_1\|_{\mathrm{H}_0^1}^2+\|v_{\psi_2}\|_{\mathrm{H}_0^1}^2+\|\psi_2\|_{\mathrm{H}_0^1}^2\right)\|v_{\psi}\|_{\mathrm{L}^2}^2,\\ |I_7|&\leq \beta(1+\gamma)\|\psi\|_{\mathrm{L}^2}\|v_{\psi_1}+\psi_1+v_{\psi_2}+\psi_2\|_{\mathrm{L}^{\infty}}\|v_{\psi}\|_{\mathrm{L}^2}\nonumber\\&\leq\frac{\beta(1+\gamma)}{2}\|\psi\|_{\mathrm{L}^2}^2+C\beta(1+\gamma)\left(\|v_{\psi_1}\|_{\mathrm{H}_0^1}^2+\|\psi_1\|_{\mathrm{H}_0^1}^2+\|v_{\psi_2}\|_{\mathrm{H}_0^1}^2+\|\psi_2\|_{\mathrm{H}_0^1}^2\right)\|v_{\psi}\|_{\mathrm{L}^2}^2,\\ |I_8|&\leq\beta\gamma\|\psi\|_{\mathrm{L}^2}\|v_{\psi}\|_{\mathrm{L}^2}\leq\frac{\beta\gamma}{2}\|\psi\|_{\mathrm{L}^2}^2+\frac{\beta\gamma}{2}\|v_{\psi}\|_{\mathrm{L}^2}^2,\\ |I_9|&\leq\beta\|\psi\|_{\mathrm{L}^2}\|(v_{\psi_1}+\psi_1)^2+(v_{\psi_1}+\psi_1)(v_{\psi_1}+\psi_1)+(v_{\psi_2}+\psi_2)^2\|_{\mathrm{L}^{\infty}}\|v_{\psi}\|_{\mathrm{L}^2}\nonumber\\&\leq\frac{\beta}{2}\|\psi\|_{\mathrm{L}^2}^2+C\beta\left(\|v_{\psi_1}\|_{\mathrm{H}_0^1}^2+\|\psi_1\|_{\mathrm{H}_0^1}^2+\|v_{\psi_2}\|_{\mathrm{H}_0^1}^2+\|\psi_2\|_{\mathrm{H}_0^1}^2\right)\|v_{\psi}\|_{\mathrm{L}^2}^2.\label{7.31} \end{align} Combining \eqref{7.21}-\eqref{7.31}, substituting it in \eqref{7.20} and then integrating it from $0$ to $t$ to get \begin{align}\label{7.32} &\|v_{\psi}(t)\|_{\mathrm{L}^2}^2+\nu\int_0^t\|\partial_xv_{\psi}(s)\|_{\mathrm{L}^2}^2ds+2\beta\gamma\int_0^t\|v_{\psi}(s)\|_{\mathrm{L}^2}^2ds\nonumber\\&\leq 
C\left(\frac{1}{\nu}+\alpha^2+\beta(1+\gamma)\right)\int_0^t\left(\|v_{\psi_1}(s)\|_{\mathrm{H}_0^1}^2+\|\psi_1(s)\|_{\mathrm{H}_0^1}^2+\|v_{\psi_2}(s)\|_{\mathrm{H}_0^1}^2+\|\psi_2(s)\|_{\mathrm{H}_0^1}^2\right)\|v_{\psi}(s)\|_{\mathrm{L}^2}^2ds\nonumber\\&\quad+C\int_0^t\|\psi(s)\|_{\mathrm{H}_0^1}^2ds+\frac{\beta}{2}(1+\gamma)\int_0^t\|\psi(s)\|_{\mathrm{L}^2}^2ds+C\beta(1+\gamma)\int_0^t\|v_{\psi}(s)\|_{\mathrm{L}^2}^2ds. \end{align} An application of Gronwall's inequality in \eqref{7.32} gives \begin{align}\label{7.33} &\|v_{\psi}(t)\|_{\mathrm{L}^2}^2+\nu\int_0^t\|\partial_xv_{\psi}(s)\|_{\mathrm{L}^2}^2ds+2\beta\gamma\int_0^t\|v_{\psi}(s)\|_{\mathrm{L}^2}^2ds\nonumber\\&\leq C\int_0^t\|\psi(s)\|_{\mathrm{H}_0^1}^2ds+\frac{\beta}{2}(1+\gamma)\int_0^t\|\psi(s)\|_{\mathrm{L}^2}^2ds\\&\quad\times\exp\left\{C\beta(1+\gamma)t+\int_0^t\left(\|v_{\psi_1}(s)\|_{\mathrm{H}_0^1}^2+\|\psi_1(s)\|_{\mathrm{H}_0^1}^2+\|v_{\psi_2}(s)\|_{\mathrm{H}_0^1}^2+\|\psi_2(s)\|_{\mathrm{H}_0^1}^2\right)ds\right\},\nonumber \end{align} for all $t\in[0,T]$. Let us now take $\psi_n\to\psi$ in $\mathrm{C}([0,T];\mathrm{L}^2(\mathcal{O}))\cap\mathrm{L}^2(0,T;\mathrm{H}_0^1(\mathcal{O})),$ as $n\to\infty$. From \eqref{7.17}, it is clear that $$\sup_{t\in[0,T]}\|v_n(t)\|_{\mathrm{L}^2}^2+\nu\int_0^T\|v_n(t)\|_{\mathrm{H}_0^1}^2dt,$$ is bounded uniformly and independent of $n$. Thus, from \eqref{7.33}, it is immediate that $v_{\psi_n}\to v_{\psi}$ in $\mathrm{C}([0,T];\mathrm{L}^2(\mathcal{O}))\cap\mathrm{L}^2(0,T;\mathrm{H}_0^1(\mathcal{O}))$ and hence the continuity of the map $\Psi$ follows. \end{proof} Let $\mathrm{U}_0:=\mathrm{Q}^{\frac{1}{2}}\mathrm{L}^2(\mathcal{O})$ and $\|\cdot\|_0$ denotes the norm on $\mathrm{U}_0$ and $\mathcal{G}_0\left(\int_0^{\cdot}h(s)d s\right)$ is the set of solutions of the equation: \begin{equation}\label{4.22} \left\{ \begin{aligned} dz(t)+ \nu Az(t)dt&=\sqrt{\varepsilon}h(t)dt, \ t\in(0,T),\\ z(0)&=0. \end{aligned} \right. \end{equation} \begin{theorem}\label{thm4.8} Let $\Theta$ maps from $\mathscr{E}$ to $\mathscr{E}$ and is given by \begin{align} \Theta(z)=z+\Psi(z), \end{align} where the map $\Psi(\cdot)$ is defined in \eqref{718} and \eqref{7.18}. For any given $R>0$ and $\delta >0$, there exists a large positive constant $\varrho_0$ such that for all $\varrho_0$, if we define the set $A_{\varrho}:=\Theta(\varrho\Theta^{-1}(B_R^c)),$ then the unique pathwise strong solution $u(\cdot)$ of the system \eqref{abstract} satisfies: \begin{align}\label{4.24} \mathbb{P}\left\{u\in A_{\varrho} \right\}\leq \exp\left\{-\varrho^2(\mathrm{J}(B_R^c)-\delta)\right\}, \end{align} where \begin{align*}B_R:=\bigg\{&v\in \mathrm{C}([0,T];\mathrm{L}^2(\mathcal{O})): \sup_{0\leq t\leq T}\|v(t)\|_{\mathrm{L}^2}^2<R \bigg\},\end{align*} \begin{align} \mathrm{J}(B_R^c)=\inf_{x\in\Theta^{-1}(B_R^c)}\mathrm{I}(x), \end{align} and \begin{align}\label{4.26} \mathrm{I}(x)=\inf_{h\in\mathrm{L}^2(0,T;\mathrm{U}_0):\ x\in\mathcal{G}_0\left(\int_0^{\cdot}h(s)d s\right)}\left\{\frac{1}{2}\int_0^T\|h(t)\|_0^2d t \right\}. \end{align} \end{theorem} \begin{proof} For each $h\in\mathrm{L}^2(0,T;\mathrm{U}_0)$, we us use the notation $\mathcal{G}_0\left(\int_0^{\cdot}h(s)d s\right)$ for the set of solutions of the equation \eqref{4.22}. 
For each $\varepsilon>0$, let $z^{\varepsilon}(\cdot)$ denotes the unique pathwise strong solution of the stochastic heat equation: \begin{equation}\label{4.23} \left\{ \begin{aligned} dz^{\varepsilon}(t)+ \nu Az^{\varepsilon}(t)dt&=\sqrt{\varepsilon}dW(t), \ t\in(0,T),\\ z^{\varepsilon}(0)&=0. \end{aligned} \right. \end{equation} Then $z^{\varepsilon}(t)=\sqrt{\varepsilon}\int_0^tR{(t-s)}dW(s)=\sqrt{\varepsilon}z(t)$, where $R(\cdot)$ is the heat semigroup and $z(\cdot)$ is the unique pathwise strong solution of the system \eqref{7p1}. Note that (see section 12.3, \cite{DaZ}, \cite{SSSP}) the large deviations rate function for the family $z^{\varepsilon}$ is given by \begin{align} \mathrm{I}(x)=\inf_{h\in\mathrm{L}^2(0,T;\mathrm{U}_0):\ x\in\mathcal{G}_0\left(\int_0^{\cdot}h(s)d s\right)}\left\{\frac{1}{2}\int_0^T\|h(t)\|_0^2d t \right\}. \end{align} Let us now define the map $\Theta$ from $\mathscr{E}$ to $\mathscr{E}$ by $ \Theta(z)=z+\Psi(z), $ where the map $\Psi(\cdot)$ is defined in \eqref{718} and \eqref{7.18}. Clearly, the map $\Theta$ is continuous by using Lemma \ref{lem7.7} and \begin{align}\label{4.03} u^{\varepsilon}=\Theta(z^{\varepsilon})=\Theta(\sqrt{\varepsilon}z),\end{align} where $u^{\varepsilon}$ satisfies: \begin{equation}\label{4.27} \left\{ \begin{aligned} du^{\varepsilon}(t)&=[- Au^{\varepsilon}(t)-\alpha B(u^{\varepsilon}(t))+\beta c(u^{\varepsilon}(t))]dt+\sqrt{\varepsilon} dW(t), \ t\in(0,T),\\ u(0)&=u_0\in\mathrm{L}^{2}(\mathcal{O}). \end{aligned} \right. \end{equation} Then, using Contraction principle (see Theorem \ref{thm4.6}), we deduce that the family $u^{\varepsilon}$ satisfies the large deviation principle with the rate function: \begin{align} \mathrm{J}(A)=\inf_{x\in\Theta^{-1}(A)}\mathrm{I}(x), \end{align} for any Borel set $A\in \mathscr{E}$, where $\Theta^{-1}(A)=\left\{x\in\mathscr{E}:\Theta(x)\in A\right\}.$ Thus, using the LDP (see Definition \ref{def4.2} (i)), we have \begin{align} \limsup_{\varepsilon\to 0}\varepsilon\log \mathbb{P}\left\{u^{\varepsilon}\in B_R^c\right\}\leq -\mathrm{J}(B_R^c), \end{align} where $B_R$ is an open ball in $\mathscr{E}$ with center zero and radius $R>0$. Thus, for any $\delta>0$, there exists an $\varepsilon_1>0$ such that for all $0<\varepsilon<\varepsilon_1$, we have \begin{align*} \varepsilon\log \mathbb{P}\left\{u^{\varepsilon}\in B_R^c\right\}\leq -\mathrm{J}(B_R^c)+\delta. \end{align*} The above inequality easily gives \begin{align}\label{4.30} \mathbb{P}\left\{u^{\varepsilon}\in B_R^c\right\}\leq \exp\left\{-\frac{1}{\varepsilon}(\mathrm{J}(B_R^c)-\delta)\right\}. \end{align} From \eqref{4.30}, it is clear that \begin{align}\label{4.31} \mathbb{P}\left\{z\in\frac{1}{\sqrt{\varepsilon}}\Theta^{-1}(B_R^c)\right\}\leq \exp\left\{-\frac{1}{\varepsilon}(\mathrm{J}(B_R^c)-\delta)\right\}, \end{align} using \eqref{4.03}. Let us denote the set $A$ to be $\Theta\left(\frac{1}{\sqrt{\varepsilon}}\Theta^{-1}(B_R^c)\right)$ and from \eqref{4.31}, we infer that \begin{align}\label{4.32} \mathbb{P}\left\{z\in A\right\}\leq \exp\left\{-\frac{1}{\varepsilon}(\mathrm{J}(B_R^c)-\delta)\right\}, \end{align} since $u=\Theta(z)$, which completes the proof. \end{proof} \begin{remark} If we take $\varrho_0=1$, then the set $A_1$ becomes $B_R^c$, and from \eqref{4.24}, we deuce that \begin{align}\label{4.36} \mathbb{P}\left\{u\in B_R^c \right\}\leq \exp\left\{-\varrho^2(\mathrm{J}(B_R^c)-\delta)\right\}, \end{align} which gives the rate of decay as $\mathrm{J}(B_R^c)$. 
Moreover, if one can assure the existence of an $R >0$ such that $B_R^c\subseteq A_{\varrho_0}$, then also the Theorem \ref{thm4.8} leads to \eqref{4.36}. \end{remark} \begin{remark}\label{rem5.10} An application of It\^o's formula to the process $\|u(\cdot)\|_{\mathrm{L}^2}^2$ yields \begin{align}\label{5p50} &\|u(t)\|_{\mathrm{L}^2}^2+2\nu\int_0^t\|u(s)\|_{\mathrm{H}_0^1}^2d s+2\beta\gamma\int_0^t\|u(s)\|_{\mathrm{L}^2}^2ds+2\beta\int_0^t\|u(s)\|_{\mathrm{L}^4}^4ds\nonumber\\&=\|u_0\|_{\mathrm{L}^2}^2+2\beta(1+\gamma)\int_0^t(u(s)^2,u(s))ds+\int_0^t\mathop{\mathrm{Tr}}(\Phi Q\Phi^*) ds+2\int_0^t(\Phi d W(s),u(s))\nonumber\\&\leq \|u_0\|_{\mathrm{L}^2}^2+\beta\int_0^t\|u(s)\|_{\mathrm{L}^4}^4ds+\beta(1+\gamma)^2\int_0^t\|u(s)\|_{\mathrm{L}^2}^2ds\nonumber\\&\quad+\eta(t)+2\int_0^t\|Q^{1/2}u(s)\|_{\mathrm{L}^2}^2ds+t\mathop{\mathrm{Tr}}(Q), \end{align} where \begin{align}\label{5p51} \eta(t)=2\int_0^t(d W(s),u(s))-2\int_0^t\|Q^{1/2}u(s)\|_{\mathrm{L}^2}^2ds. \end{align} But we know that \begin{align} \|Q^{1/2}u\|_{\mathrm{L}^2}\leq\|Q^{1/2}\|_{\mathcal{L}(\mathrm{L}^2)}\|u\|_{\mathrm{L}^2}\leq\mathop{\mathrm{Tr}}(Q)^{1/2}\|u\|_{\mathrm{L}^2}. \end{align} Thus, from \eqref{5p51}, it is immediate that \begin{align} &\|u(t)\|_{\mathrm{L}^2}^2+2\nu\int_0^t\|u(s)\|_{\mathrm{H}_0^1}^2d s+\beta\int_0^t\|u(s)\|_{\mathrm{L}^4}^4ds\nonumber\\&\leq \|u_0\|_{\mathrm{L}^2}^2+\mathop{\mathrm{Tr}}(Q) t+\eta(t)+\beta(1+\gamma^2)\int_0^t\|u(s)\|_{\mathrm{L}^2}^2ds+2\mathop{\mathrm{Tr}}(Q)\int_0^t\|u(s)\|_{\mathrm{L}^2}^2ds \end{align} An application of Gronwall's inequality yields \begin{align} \|u(t)\|_{\mathrm{L}^2}^2&\leq \left(\|u_0\|_{\mathrm{L}^2}^2+\mathop{\mathrm{Tr}}(Q) t+\eta(t)\right)+\int_0^t \left(\|u_0\|_{\mathrm{L}^2}^2+\mathop{\mathrm{Tr}}(Q) s+\eta(s)\right)\left(\beta(1+\gamma^2)+2\mathop{\mathrm{Tr}}(Q)\right)\nonumber\\&\qquad\times\exp\left(\int_s^t\left(\beta(1+\gamma^2)+2\mathop{\mathrm{Tr}}(Q)\right)dr\right)ds. \end{align} Taking supremum over both sides in the above inequality, we find \begin{align} & \sup_{t\in[0,T]}\|u(t)\|_{\mathrm{L}^2}^2\leq \left(\|u_0\|_{\mathrm{L}^2}^2+\mathop{\mathrm{Tr}}(Q) T+\sup_{t\in[0,T]}\eta(t)\right)M\exp\left(MT\right), \end{align} where $M= \beta(1+\gamma^2)+2\mathop{\mathrm{Tr}}(Q)$. For fixed $R>0$, we have \begin{align}\label{5p56} \mathbb{P}\left\{ \sup_{t\in[0,T]}\|u(t)\|_{\mathrm{L}^2}>R\right\}& \leq \mathbb{P}\left\{\left(\|u_0\|_{\mathrm{L}^2}^2+\mathop{\mathrm{Tr}}(Q) T+\sup_{t\in[0,T]}\eta(t)\right)Me^{MT}>R^2\right\}\nonumber\\&=\mathbb{P}\left\{\sup_{t\in[0,T]}\eta(t)>\frac{R^2}{M}e^{-MT}-\left(\|u_0\|_{\mathrm{L}^2}^2+\mathop{\mathrm{Tr}}(Q)T\right)\right\}\nonumber\\&=\mathbb{P}\left\{\sup_{t\in[0,T]}\exp(\eta(t))>\exp\left[\frac{R^2}{M}e^{-MT}-\left(\|u_0\|_{\mathrm{L}^2}^2+\mathop{\mathrm{Tr}}(Q)T\right)\right]\right\}\nonumber\\&\leq \exp\left[\left(\|u_0\|_{\mathrm{L}^2}^2+\mathop{\mathrm{Tr}}(Q)T\right)\right]e^{-\frac{R^2}{M}e^{-MT}}, \end{align} where we used Doob’s martingale inequality. From the expression \eqref{5p56}, we know that the rate of decay is of the order of $R^2$. We can also follow the same procedure as in the Theorem \ref{thm4.8} to get a similar result. Let us define the set \begin{align} \mathrm{F}_R:=\left\{x:\mathrm{J}(x)\leq R^2 \right\}, \end{align} for $R> 0$ and define the set $\mathrm{G}_R$ as any open neighborhood of $\mathrm{F}_R$. 
Then for any given any $\delta>0$, there exists an $\varepsilon_1 > 0$ such that for all $0<\varepsilon < \varepsilon_1$, from \eqref{4.30}, we have \begin{align} \mathbb{P}\left\{u^{\varepsilon}\in\mathrm{G}_R^c\right\}\leq \exp\left\{-\frac{1}{\varepsilon}(\mathrm{J}(\mathrm{G}_R^c)-\delta)\right\}\leq \exp\left\{-\frac{1}{\varepsilon}(R^2-\delta)\right\}, \end{align} using the definition of the set $\mathrm{G}_R$. Hence, using \eqref{4.03}, it is immediate that \begin{align} \mathbb{P}\left\{u\in\Psi\left(\frac{1}{\sqrt{\varepsilon}}\Psi^{-1}(\mathrm{G}_R^c)\right)\right\}\leq \exp\left\{-\frac{1}{\varepsilon}(R^2-\delta)\right\}. \end{align} \end{remark} \section{Exponential Moments, Invariant Measures and Ergodicity}\label{sec9}\setcounter{equation}{0} Let us now discuss the existence and uniqueness of invariant measures and ergodicity results for the stochastic Burgers-Huxley equation \eqref{abstract} with additive Gaussian noise. We show that there exists a unique invariant measure for the Markovian transition probability associated to the system (\ref{abstract}) by making use of exponential stability of solutions. \subsection{Exponential moments and stability} In this subsection, we establish the exponential stability of the Burgers-Huxley equation perturbed by additive Gaussian noise. That is, we are considering (\ref{2.1}) with additive noise. Thus $u(\cdot)$ satisfies: \begin{equation}\label{6.1a} \left\{ \begin{aligned} du(t)&=[- \nu Au(t)-\alpha B(u(t))+\beta c(u(t))]dt+dW(t), \ t\in(0,T),\\ u(0)&=u_0, \end{aligned} \right. \end{equation} where $u_0\in\mathrm{L}^2(\mathcal{O})$ and $\mathrm{W}(\cdot)$ is an $\mathrm{L}^2(\mathcal{O})$-valued $Q$-Wiener process with $\mathop{\mathrm{Tr}}(Q)<\infty$. Since $\mathop{\mathrm{Tr}}(Q)<\infty$, the existence and uniqueness of strong solution to the system (\ref{6.1a}) follows from the Theorem \ref{exis}. Thus, we know that the system \eqref{abstract} has a unique strong solution with paths in $\mathrm{C}([0,T];\mathrm{L}^2(\mathcal{O}))\cap\mathrm{L}^2(0,T;\mathrm{H}_0^1(\mathcal{O}))$, $\mathbb{P}$-a.s. Then, we have the following Theorem on the exponential moments of the system \eqref{6.1a}. \begin{theorem}\label{expe} Let $u(\cdot)$ be a unique strong solution of the problem (\ref{6.1a}) such that for \begin{align}\label{7.2}0<\varepsilon\leq \frac{\left(\nu\pi^2-{\beta(1+\gamma^2)}\right)}{2\|Q\|_{\mathcal{L}(\mathrm{L}^2)}}\ \text{ and }\ \nu > \frac{\beta(1+\gamma^2)}{\pi^2},\end{align} we have \begin{align}\label{5.68} \mathbb{E}\left[\exp\left(\varepsilon\|u(t)\|_{\mathrm{L}^2}^2+\varepsilon\nu\int_0^t\|u(s)\|_{\mathrm{H}_0^1}^2d s+\varepsilon\beta\int_0^t\|u(s)\|_{\mathrm{L}^4}^4ds\right)\right]\leq e^{\varepsilon\|u_0\|_{\mathrm{L}^2}^2+\varepsilon t\mathop{\mathrm{Tr}}(Q)}. \end{align} \end{theorem} \begin{proof} Let us apply the infinite dimensional It\^o's formula to the process $\|u(\cdot)\|_{\mathrm{L}^2}^2$ to get \begin{align} &\|u(t)\|_{\mathrm{L}^2}^2+2\nu\int_0^t\|u(s)\|_{\mathrm{H}_0^1}^2d s+2\beta\gamma\int_0^t\|u(s)\|_{\mathrm{L}^2}^2ds+2\beta\int_0^t\|u(s)\|_{\mathrm{L}^4}^4ds\nonumber\\&=\|u_0\|_{\mathrm{L}^2}^2+2\beta(1+\gamma)\int_0^t(u(s)^2,u(s))ds+\mathop{\mathrm{Tr}}(Q) t+2\int_0^t(d W(s),u(s)). 
\end{align} We define $$\Theta(t):=\|u(t)\|_{\mathrm{L}^2}^2+\nu\int_0^t\|u(s)\|_{\mathrm{H}_0^1}^2d s+\beta\int_0^t\|u(s)\|_{\mathrm{L}^4}^4ds,$$ and apply the infinite dimensional It\^o's formula to the process $e^{\varepsilon\Theta(t)}$ to obtain \begin{align}\label{5.62a} e^{\varepsilon\Theta(t)}&=e^{\varepsilon\|u_0\|_{\mathrm{L}^2}^2}+\varepsilon\int_0^te^{\varepsilon\Theta(s)}\left(-\nu\|u(s)\|_{\mathrm{H}_0^1}^2-\beta\|u(s)\|_{\mathrm{L}^4}^4-2\beta\gamma\|u(s)\|_{\mathrm{L}^2}^2+2\beta(1+\gamma)(u(s)^2,u(s))\right)d s\nonumber\\&\quad +\varepsilon\int_0^te^{\varepsilon\Theta(s)}\mathop{\mathrm{Tr}}(Q)d s+2\varepsilon\int_0^te^{\varepsilon\Theta(s)}\left(dW(s),u(s)\right) +2\varepsilon^2\int_0^te^{\varepsilon\Theta(s)}\|Q^{1/2}u(s)\|_{\mathrm{L}^2}^2d s, \end{align} since $\mathop{\mathrm{Tr}}((u\otimes u)Q)=\|Q^{1/2}u\|_{\mathrm{L}^2}^2$. Note that \begin{align}\label{5.63} 2\beta(1+\gamma) (u^2,u)\leq 2\beta(1+\gamma) \|u\|_{\mathrm{L}^4}^2\|u\|_{\mathrm{L}^2}\leq\beta\|u\|_{\mathrm{L}^4}^4+\beta(1+\gamma)^2\|u\|_{\mathrm{L}^2}^2, \end{align} and \begin{align}\label{5.64} \|Q^{1/2}u\|_{\mathrm{L}^2}^2\leq \|Q\|_{\mathcal{L}(\mathrm{L}^2)}\|u\|_{\mathrm{L}^2}^2\leq \frac{\|Q\|_{\mathcal{L}(\mathrm{L}^2)}}{\pi^2}\|u\|_{\mathrm{H}_0^1}^2. \end{align} Taking expectation in (\ref{5.62a}), and then using (\ref{5.63}) and (\ref{5.64}), we obtain \begin{align}\label{5.66} \mathbb{E}\left[e^{\varepsilon\Theta(t)}\right]&\leq e^{\varepsilon\|u_0\|_{\mathrm{L}^2}^2}+\varepsilon\mathbb{E}\left\{\int_0^te^{\varepsilon\Theta(s)}\left[-\left(\nu-\frac{\beta(1+\gamma^2)}{\pi^2}\right)+\frac{2\varepsilon\|Q\|_{\mathcal{L}(\mathrm{L}^2)}}{\pi^2}\right]\|u(s)\|_{\mathrm{H}_0^1}^2d s\right\} \nonumber\\&\quad+\varepsilon\mathbb{E}\left[\int_0^te^{\varepsilon\Theta(s)}\mathop{\mathrm{Tr}}(Q)d s\right]. \end{align} Now, for $0<\varepsilon\leq \frac{\left(\nu\pi^2-{\beta(1+\gamma^2)}\right)}{2\|Q\|_{\mathcal{L}(\mathrm{L}^2)}}\ \text{ and }\ \nu > \frac{\beta(1+\gamma^2)}{\pi^2}$, we have \begin{align}\label{5.67} \mathbb{E}\left[e^{\varepsilon\Theta(t)}\right]&\leq e^{\varepsilon\|u_0\|_{\mathrm{L}^2}^2} +\varepsilon\int_0^t\mathbb{E}\left[e^{\varepsilon\Theta(s)}\right]\mathop{\mathrm{Tr}}(Q)d s. \end{align} An application of Gronwall's inequality in (\ref{5.67}) yields (\ref{5.68}). \end{proof} \begin{theorem}\label{exps} Let $u(\cdot)$ and $v(\cdot)$ be two solutions of the system (\ref{6.1a}) with the initial data $u_0,v_0\in\mathrm{L}^2(\mathcal{O})$, respectively. Then for \begin{align}\label{610} \nu>\frac{\beta(1+\gamma^2)}{\pi^2}\ \text{ and } \ \nu^3\pi^2-{\beta(1+\gamma^2)}\nu^2\geq 2C\alpha^2\mathop{\mathrm{Tr}}(Q), \end{align}we have \begin{align}\label{5.68a} &\mathbb{E}\left[\|u(t)-v(t)\|_{\mathrm{L}^2}^2\right]\nonumber\\&\leq \|u_0-v_0\|_{\mathrm{L}^2}^2e^{\left\{{\frac{C\alpha^2}{\nu^2}\|u_0\|_{\mathrm{L}^2}^2}\right\}}\exp\left\{-\left[\left(\nu\pi^2-\frac{\beta(1+\gamma^2)}{2}\right)-\frac{C\alpha^2}{\nu^2}\mathop{\mathrm{Tr}}(Q)\right]t\right\}. \end{align} \end{theorem} \begin{proof} Let $w(t)=u(t)-v(t)$ and $w(\cdot)$ satisfies: \begin{equation}\label{7p12} \left\{ \begin{aligned} dw(t)&=[- \nu Aw(t)-\alpha [B(u(t))-B(v(t))]+\beta [c(u(t))-c(v(t))]]dt, \ t\in(0,T),\\ w(0)&=u_0-v_0. \end{aligned} \right. 
\end{equation} Taking inner product with $w(\cdot)$, it can be easily seen that \begin{align}\label{7.13} \|w(t)\|_{\mathrm{L}^2}^2&=\|w_0\|_{\mathrm{L}^2}^2-2\nu\int_0^t\|w(s)\|_{\mathrm{H}_0^1}^2\d s -2\alpha\int_0^t\langle B(u(s))-B(v(s)),w(s)\rangle d s\nonumber\\&\quad+2\beta\int_0^t(c(u(s))-c(v(s)),w(s))ds\nonumber\\&=\|w_0\|_{\mathrm{L}^2}^2-2\nu\int_0^t\|w(s)\|_{\mathrm{H}_0^1}^2d s-2\alpha\int_0^t(u(s)\partial_xw(s),w(s))ds\nonumber\\&\quad-2\beta\gamma\int_0^t\|w(s)\|_{\mathrm{L}^2}^2ds-2\beta\int_0^t\|u(s)w(s)\|_{\mathrm{L}^2}^2ds-2\beta\int_0^t\|v(s)w(s)\|_{\mathrm{L}^2}^2ds\nonumber\\&\quad+\beta(1+\gamma)\int_0^t((u(s)+v(s))w(s),w(s))ds-\beta\int_0^t(u(s)v(s)w(s),w(s))ds. \end{align} We estimate $-2\alpha(u\partial_xw,w)$ using H\"older's and Young's inequalities as \begin{align}\label{7.14} -2\alpha(u\partial_xw,w)\leq2\alpha\|u\|_{\mathrm{L}^{\infty}}\|w\|_{\mathrm{H}_0^1}\|w\|_{\mathrm{L}^2}\leq{\nu}\|w\|_{\mathrm{H}_0^1}^2+\frac{C\alpha^2}{\nu}\|u\|_{\mathrm{H}_0^1}^2\|w\|_{\mathrm{L}^2}^2, \end{align} where $C$ is the constant appearing in $\|u\|_{\mathrm{L}^{\infty}}\leq C\|u\|_{\mathrm{H}_0^1}$. We estimate $\beta(1+\gamma)((u+v)w,w)$ and $-\beta(uvw,w)$ using H\"older's and Young's inequalities as \begin{align} \beta(1+\gamma)((u+v)w,w)&\leq\beta(1+\gamma)(\|uw\|_{\mathrm{L}^2}+\|vw\|_{\mathrm{L}^2})\|w\|_{\mathrm{L}^2}\nonumber\\&\leq \frac{\beta}{2}\left(\|uw\|_{\mathrm{L}^2}^2+\|vw\|_{\mathrm{L}^2}^2\right)+\frac{\beta(1+\gamma)^2}{2}\|w\|_{\mathrm{L}^2}^2, \\-\beta(uvw,w)&\leq \frac{\beta}{2}\left(\|uw\|_{\mathrm{L}^2}^2+\|vw\|_{\mathrm{L}^2}^2\right). \label{7p16} \end{align} Combining \eqref{7.14}-\eqref{7p16} and substituting it in \eqref{7.13}, we get \begin{align} \|w(t)\|_{\mathrm{L}^2}^2&\leq \|w_0\|_{\mathrm{L}^2}^2-\int_0^t\left(-\nu\pi^2+\frac{\beta(1+\gamma^2)}{2}\right)\|w(s)\|_{\mathrm{L}^2}^2d s+\frac{C\alpha^2}{\nu}\int_0^t\|u(s)\|_{\mathrm{H}_0^1}^2\|w(s)\|_{\mathrm{L}^2}^2ds\nonumber\\&\quad-\beta\gamma\int_0^t\|w(s)\|_{\mathrm{L}^2}^2ds-\beta\int_0^t\|u(s)w(s)\|_{\mathrm{L}^2}^2ds-\beta\int_0^t\|v(s)w(s)\|_{\mathrm{L}^2}^2ds\nonumber\\&\leq \|w_0\|_{\mathrm{L}^2}^2-\int_0^t\left(-\nu\pi^2+\frac{\beta(1+\gamma^2)}{2}\right)\|w(s)\|_{\mathrm{L}^2}^2d s+\frac{C\alpha^2}{\nu}\int_0^t\|u(s)\|_{\mathrm{H}_0^1}^2\|w(s)\|_{\mathrm{L}^2}^2ds. \end{align} Thus, by applying Gronwall's inequality, we obtain \begin{align} \|w(t)\|_{\mathrm{L}^2}^2\leq \|w_0\|_{\mathrm{L}^2}^2\exp\left\{-\left(\nu\pi^2-\frac{\beta(1+\gamma^2)}{2}\right)t+\frac{C\alpha^2}{\nu}\int_0^t\|u(s)\|_{\mathrm{H}_0^1}^2d s\right\}. \end{align} and \begin{align} \mathbb{E}\left[\|w(t)\|_{\mathrm{L}^2}^2\right]&\leq\|w_0\|_{\mathrm{L}^2}^2e^{-\left(\nu\pi^2-\frac{\beta(1+\gamma^2)}{2}\right)t}\mathbb{E}\left[\exp\left(\frac{C\alpha^2}{\nu}\int_0^t\|u(s)\|_{\mathrm{H}_0^1}^2d s\right)\right]\nonumber\\& \leq \|w_0\|_{\mathrm{L}^2}^2e^{-\left\{\left(\nu\pi^2-\frac{\beta(1+\gamma^2)}{2}\right)-\frac{C\alpha^2}{\nu^2}\mathop{\mathrm{Tr}}(Q)\right\}t}\exp\left\{{\frac{C\alpha^2}{\nu^2}\|u_0\|_{\mathrm{L}^2}^2}\right\}, \end{align} where we used the bound given in \eqref{5.68} for $\nu^3\pi^2-\beta(1+\gamma^2)\nu^2\geq 2C\alpha^2\|Q\|_{\mathcal{L}(\mathrm{L}^2)}$. Thus, for $\nu>\frac{\beta(1+\gamma^2)}{2\pi^2}$ and $\nu^3\pi^2-\frac{\beta(1+\gamma^2)}{2}\nu^2\geq 2C\alpha^2\mathop{\mathrm{Tr}}(Q)$, we get the required result given in \eqref{5.68a}. 
Since $\|Q\|_{\mathcal{L}(\mathrm{L}^2)}\leq\mathop{\mathrm{Tr}}(Q)$, the condition given in \eqref{610} is a sufficient condition for obtaining the estimate \eqref{5.68a}. \end{proof} \subsection{Preliminaries} In this subsection, we give the definitions of invariant measures, ergodic, strongly mixing and exponentially mixing invariant measures. Let $\mathscr{E}$ be a Polish space. \begin{definition} A probability measure $\mu$ on $(\mathscr{E},\mathscr{B}(\mathscr{E}))$ is called \emph{an invariant measure or a stationary measure} for a given transition probability function $P(t,x,d y),$ if it satisfies $$\mu(A)=\int_{\mathscr{E}}{P}(t,x,A)d\mu(x),$$ for all $A\in\mathscr{B}(\mathscr{E})$ and $t>0$. Equivalently, if for all $\varphi\in \mathrm{C}_b(\mathscr{E})$ (the space of bounded continuous functions on $\mathscr{E}$), and all $t\geq 0$, $$\int_{\mathscr{E}}\varphi(x)d\mu(x)=\int_{\mathscr{E}}(P_t\varphi)(x)d\mu(x),$$ where the Markov semigroup $(P_t)_{t\geq 0}$ is defined by $$P_t\varphi(x)=\int_{\mathscr{E}}\varphi(y)P(t,x,d y).$$ \end{definition} \begin{definition}[Theorem 3.2.4, Theorem 3.4.2, \cite{GDJZ}] Let $\mu$ be an invariant measure for $\left(P_t\right)_{t\geq 0}.$ We say that the measure $\mu$ is an \emph{ergodic measure,} if for all $\varphi \in \mathrm{L}^2(\mathscr{E};\mu), $ we have $$ \lim_{T\to +\infty}\frac{1}{T}\int_0^T (P_t\varphi)(x) d t =\int_{\mathscr{E}}\varphi(x) d\mu(x) \ \text{ in } \ \mathrm{L}^2(\mathscr{E};\mu).$$ The invariant measure $\mu$ for $\left(P_t\right)_{t\geq 0}$ is called \emph{strongly mixing} if for all $\varphi \in \mathrm{L}^2(\mathscr{E};\mu),$ we have $$\lim_{t\to+\infty}P_t\varphi(x) = \int_{\mathscr{E}}\varphi(x) d\mu(x)\ \text{ in }\ \mathrm{L}^2(\mathscr{E};\mu).$$ The invariant measure $\mu$ for $\left(P_t\right)_{t\geq 0}$ is called \emph{exponentially mixing}, if there exists a constant $k>0$ and a positive function $\Psi(\cdot)$ such that for any bounded Lipschitz function $\varphi$, all $t>0$ and all $x\in\mathscr{E}$, $$\left|P_t\varphi(x)-\int_{\mathscr{E}}\varphi(x)d\mu(x)\right|\leq \Psi(x)e^{-k t}\|\varphi\|_{\text{Lip}},$$ where $\|\cdot\|_{\text{Lip}}$ is the Lipschitz constant. \end{definition} \begin{remark} Clearly exponentially mixing implies strongly mixing. Theorem 3.2.6, \cite{GDJZ} states that if $\mu$ is the unique invariant measure for $(P_t)_{t\geq 0}$, then it is ergodic. \end{remark} The interested readers are referred to see \cite{GDJZ} for more details on the ergodicity for infinite dimensional systems. \subsection{Existence of a unique invariant measure} In this subsection, we show that there exists a unique invariant measure for the Markovian transition probability associated to the system (\ref{6.1a}). Moreover, we show that the invariant measure is ergodic and strongly mixing (in fact exponentially mixing). Let $u(t;u_0)$ denotes the unique strong solution of the system (\ref{6.1a}) with the initial condition $u_0\in\mathrm{L}^2(\mathcal{O}).$ Let $(P_t)_{t\geq 0}$ be the \emph{Markovian transition semigroup} in the space $\mathrm{C}_b(\mathrm{L}^2(\mathcal{O}))$ associated to the system (\ref{6.1a}) defined by \begin{align}\label{mar} P_t\varphi(u_0)=\mathbb{E}\left[\varphi(u(t;u_0))\right]=\int_{\mathrm{L}^2}\varphi(y)P(t,u_0,\d y)=\int_{\mathrm{L}^2}\varphi(y)\mu_{t,u_0}(d y),\;\varphi\in \mathrm{C}_b(\mathrm{L}^2(\mathcal{O})), \end{align} where $P(t,u_0,d y)$ is the transition probability of $u(t;u_0)$ and $\mu_{t,u_0}$ is the law of $u(t;u_0)$. 
The semigroup $(P_t)_{t\geq 0}$ is Feller, since the solution to \eqref{6.1a} depends continuously on the initial data. From (\ref{mar}), we also have \begin{align}\label{amr} P_t\varphi(u_0)=\left<\varphi,\mu_{t,u_0}\right>=\left<P_t\varphi,\mu\right>, \end{align} where $\mu$ is the law of the initial data $u_0\in\mathrm{L}^2(\mathcal{O})$. Thus, from (\ref{amr}), we have $\mu_{t,u_0}=P_t^*\mu$. We say that a probability measure $\mu$ on $\mathrm{L}^2(\mathcal{O})$ is an \emph{invariant measure} if \begin{align} P_t^*\mu=\mu,\textrm{ for all }\ t\geq 0. \end{align} That is, if a solution has law $\mu$ at some time, then it has the same law for all later times. For such a solution, it can be shown by the Markov property that for all $(t_1,\ldots,t_n)$ and $\tau>0$, $(u(t_1+\tau;u_0),\ldots,u(t_n+\tau;u_0))$ and $(u(t_1;u_0),\ldots,u(t_n;u_0))$ have the same law. In this case, we say that the process $u$ is \emph{stationary}. For more details, we refer the interested reader to \cite{GDJZ,ADe}. \begin{theorem}\label{EIM} Let $u_0\in\mathrm{L}^2(\mathcal{O})$ be given and $\mathop{\mathrm{Tr}}(Q)<+\infty$. Then, for $\nu>\frac{\beta(1+\gamma^2)}{2\pi^2},$ there exists an invariant measure for the system (\ref{6.1a}) with support in $\mathrm{H}_0^1(\mathcal{O})$. \end{theorem} \begin{proof} Let us apply the infinite dimensional It\^o formula to the process $\|u(\cdot)\|_{\mathrm{L}^2}^2$ to get \begin{align}\label{7p4} &\|u(t)\|_{\mathrm{L}^2}^2+2\nu\int_0^t\|u(s)\|_{\mathrm{H}_0^1}^2ds+2\beta\gamma\int_0^t\|u(s)\|_{\mathrm{L}^2}^2ds+2\beta\int_0^t\|u(s)\|_{\mathrm{L}^4}^4ds\nonumber\\&=\|u_0\|_{\mathrm{L}^2}^2+2\beta(1+\gamma)\int_0^t(u(s)^2,u(s))ds+\mathop{\mathrm{Tr}}(Q) t+2\int_0^t(d W(s),u(s))\nonumber\\&\leq \|u_0\|_{\mathrm{L}^2}^2+\beta\int_0^t\|u(s)\|_{\mathrm{L}^4}^4ds+\beta(1+\gamma)^2\int_0^t\|u(s)\|_{\mathrm{L}^2}^2ds+\mathop{\mathrm{Tr}}(Q) t+2\int_0^t(d W(s),u(s)). \end{align} Taking the expectation in (\ref{7p4}), using the Poincar\'e inequality (\ref{2.10}) and the fact that the final term is a martingale with zero expectation, we obtain \begin{align}\label{5.4} & \mathbb{E}\left\{\|u(t)\|_{\mathrm{L}^2}^2+2\left(\nu-\frac{\beta(1+\gamma^2)}{2\pi^2}\right)\int_0^t\|u(s)\|_{\mathrm{H}_0^1}^2ds+\beta\int_0^t\|u(s)\|_{\mathrm{L}^4}^4ds\right\}\nonumber\\&\leq \mathbb{E}\left[\|u_0\|_{\mathrm{L}^2}^2\right]+\mathop{\mathrm{Tr}}(Q)t. \end{align} Hence, for $\nu>\frac{\beta(1+\gamma^2)}{2\pi^2}$, we have \begin{align}\label{5.6} \frac{\xi}{t}\mathbb{E}\left[\int_0^{t}\|u(s)\|_{\mathrm{H}_0^1}^2ds\right]\leq \frac{1}{T_0}\|u_0\|_{\mathrm{L}^2}^2+\mathop{\mathrm{Tr}}(Q),\text{ for all }t>T_0, \end{align} where $\xi=2\left(\nu-\frac{\beta(1+\gamma^2)}{2\pi^2}\right)$. Thus, using Markov's inequality, we have \begin{align}\label{5.7} &\lim_{R\to\infty}\sup_{T>T_0}\left[\frac{1}{T}\int_0^T\mathbb{P}\Big\{\|u(t)\|_{\mathrm{H}_0^1}>R\Big\}d t\right]\nonumber \\&\leq \lim_{R\to\infty}\sup_{T>T_0}\frac{1}{R^2}\mathbb{E}\left[\frac{1}{T}\int_0^T\|u(t)\|_{\mathrm{H}_0^1}^2d t\right]=0.
\end{align} Hence, in view of the estimate (\ref{5.7}) and the compactness of the embedding $\mathrm{H}_0^1(\mathcal{O})\subset\mathrm{L}^2(\mathcal{O})$, a standard argument shows that the family of probability measures $$\mu_{t,u_0}(\cdot)=\frac{1}{t}\int_0^t\Pi_{s,u_0}(\cdot)ds,\ \text{ where }\ \Pi_{t,u_0}(\Lambda)=\mathbb{P}\left(u(t;u_0)\in\Lambda\right), \ \Lambda\in\mathscr{B}(\mathrm{L}^2(\mathcal{O})),$$ is tight, that is, for each $\varepsilon>0$, there is a compact subset $K\subset\mathrm{L}^2(\mathcal{O})$ such that $\mu_{t,u_0}(K^c)\leq \varepsilon$, for all $t>0$. Thus, by the Krylov-Bogoliubov theorem (or by a result of Chow and Khasminskii, see \cite{CHKH}), $\mu_{t_n,u_0}\to\mu$ weakly as $n\to\infty$, and $\mu$ is an invariant measure for the transition semigroup $(P_t)_{t\geq 0}$, defined by $$P_t\varphi(u_0)=\mathbb{E}\left[\varphi(u(t;u_0))\right],$$ for all $\varphi\in\mathrm{C}_b(\mathrm{L}^2(\mathcal{O}))$, where $u(\cdot)$ is the unique strong solution of (\ref{6.1a}) with initial condition $u_0\in\mathrm{L}^2(\mathcal{O})$. \end{proof} Now we establish the uniqueness of the invariant measure for the system (\ref{6.1a}) using the exponential stability results established in Theorem \ref{exps}. Similar results have been established for the 2D stochastic Navier-Stokes equations in \cite{ADe}, for 2D magnetohydrodynamic systems in \cite{UMTM}, and for the 2D Oldroyd model of order one in \cite{MTM3}. \begin{theorem}\label{UEIM} Let the conditions given in Theorems \ref{expe} and \ref{exps} hold true and $u_0\in\mathrm{L}^2(\mathcal{O})$ be given. Then, under the condition given in \eqref{5.68a}, there is a unique invariant measure $\mu$ for the system (\ref{6.1a}). The measure $\mu$ is ergodic and strongly mixing, that is, \begin{align}\label{6.9a}\lim_{t\to\infty}P_t\varphi(u_0)=\int_{\mathrm{L}^2}\varphi(v_0)d\mu(v_0), \ \mu\text{-a.s., for all }\ u_0\in\mathrm{L}^2(\mathcal{O})\ \text{ and }\ \varphi\in\mathrm{C}_b(\mathrm{L}^2(\mathcal{O})).\end{align} Moreover, we have \begin{align}\label{7.25} \int_{\mathrm{L}^2}\exp\left(\varepsilon\|u_0\|_{\mathrm{L}^2}^2\right)d\mu(u_0)<+\infty, \end{align} where \begin{align}\label{721} 0<\varepsilon\leq \frac{\left(\nu\pi^2-{\beta(1+\gamma^2)}\right)}{2\|Q\|_{\mathcal{L}(\mathrm{L}^2)}}\ \text{ and }\ \nu > \frac{\beta(1+\gamma^2)}{\pi^2}.
\end{align} \end{theorem} \begin{proof} \textbf{Step (1):} \emph{Uniqueness of the invariant measure $\mu$.} For $\varphi\in \text{Lip}(\mathrm{L}^2(\mathcal{O}))$ (the space of Lipschitz continuous functions on $\mathrm{L}^2(\mathcal{O})$), since $\mu$ is an invariant measure, we have \begin{align} & \left|P_t\varphi(u_0)-\int_{\mathrm{L}^2}\varphi(v_0)\mu(d v_0)\right|\nonumber\\&= \left|\mathbb{E}[\varphi(u(t;u_0))]-\int_{\mathrm{L}^2}P_t\varphi(v_0)\mu(d v_0)\right|\nonumber\\&=\left|\int_{\mathrm{L}^2}\mathbb{E}\left[\varphi(u(t;u_0))-\varphi(u(t;v_0))\right]\mu(d v_0)\right|\nonumber\\&\leq L_{\varphi}\int_{\mathrm{L}^2}\mathbb{E}\left\|u(t;u_0)-u(t;v_0)\right\|_{\mathrm{L}^2}\mu(d v_0)\nonumber\\&\leq L_{\varphi}\exp{\left\{{\frac{C\alpha^2}{\nu^2}\|u_0\|_{\mathrm{L}^2}^2}\right\}}e^{-\widehat{\kappa}t}\int_{\mathrm{L}^2}\|u_0-v_0\|_{\mathrm{L}^2}\mu(d v_0)\nonumber\\&\leq L_{\varphi}\exp{\left\{{\frac{C\alpha^2}{\nu^2}\|u_0\|_{\mathrm{L}^2}^2}\right\}}e^{-\widehat{\kappa}t}\left(\|u_0\|_{\mathrm{L}^2}+\int_{\mathrm{L}^2}\|v_0\|_{\mathrm{L}^2}\mu(d v_0)\right)\nonumber\\&\to 0\text{ as } t\to\infty, \end{align} since $\int_{\mathrm{L}^2}\|v_0\|_{\mathrm{L}^2}\mu(d v_0)<+\infty$, where $\widehat{\kappa}=\left(\nu\pi^2-\frac{\beta(1+\gamma^2)}{2}\right)-\frac{C\alpha^2}{\nu^2}\mathop{\mathrm{Tr}}(Q)>0$. Hence, we deduce (\ref{6.9a}) for every $\varphi\in \mathrm{C}_b (\mathrm{L}^2(\mathcal{O}))$ by the density of $\text{Lip}(\mathrm{L}^2(\mathcal{O}))$ in $\mathrm{C}_b (\mathrm{L}^2(\mathcal{O}))$. Note that we have in fact the stronger result that $P_t\varphi(u_0)$ converges exponentially fast to equilibrium, which is the exponential mixing property. This also easily gives the uniqueness of the invariant measure. Indeed, if $\widetilde\mu$ is another invariant measure, then \begin{align} & \left|\int_{\mathrm{L}^2}\varphi(u_0)\mu(d u_0)-\int_{\mathrm{L}^2}\varphi(v_0)\widetilde\mu(d v_0)\right|\nonumber\\&= \left|\int_{\mathrm{L}^2}P_t\varphi(u_0)\mu(d u_0)-\int_{\mathrm{L}^2}P_t\varphi(v_0)\widetilde\mu(d v_0)\right|\nonumber\\&=\left|\int_{\mathrm{L}^2}\int_{\mathrm{L}^2}\left[P_t\varphi(u_0)-P_t\varphi(v_0)\right]\mu(d u_0)\widetilde\mu(d v_0)\right|\nonumber\\&\leq L_{\varphi}e^{-\widehat{\kappa}t}\int_{\mathrm{L}^2}\int_{\mathrm{L}^2}\exp{\left\{{\frac{C\alpha^2}{\nu^2}\|u_0\|_{\mathrm{L}^2}^2}\right\}}\|u_0-v_0\|_{\mathrm{L}^2}\mu(d u_0)\widetilde\mu(d v_0)\nonumber\\&\to 0\ \text{ as }\ t\to\infty, \end{align} and uniqueness follows. By Theorem 3.2.6, \cite{GDJZ}, since $\mu$ is the unique invariant measure for $(P_t)_{t\geq 0}$, we know that it is ergodic. \vskip 0.2cm \noindent \textbf{Step (2):} \emph{Proof of \eqref{7.25}.} In order to prove \eqref{7.25}, we use a stationary solution $u(\cdot)$ with invariant law $\mu$. Note that the process $\|u(\cdot)\|_{\mathrm{L}^2}^2$ satisfies: \begin{align}\label{722} &\|u(t)\|_{\mathrm{L}^2}^2+2\nu\int_0^t\|u(s)\|_{\mathrm{H}_0^1}^2ds+2\beta\gamma\int_0^t\|u(s)\|_{\mathrm{L}^2}^2ds+2\beta\int_0^t\|u(s)\|_{\mathrm{L}^4}^4ds\nonumber\\&=\|u_0\|_{\mathrm{L}^2}^2+2\beta(1+\gamma)\int_0^t(u(s)^2,u(s))ds+\mathop{\mathrm{Tr}}(Q) t+2\int_0^t(d W(s),u(s)).
\end{align} Let us now apply the infinite dimensional It\^o formula to the process $\exp\left(\varepsilon\|u(t)\|_{\mathrm{L}^2}^2\right)$ to obtain \begin{align}\label{7.28} &\exp\left(\varepsilon\|u(t)\|_{\mathrm{L}^2}^2\right)+2\varepsilon\nu\int_0^t\exp\left(\varepsilon\|u(s)\|_{\mathrm{L}^2}^2\right)\|u(s)\|_{\mathrm{H}_0^1}^2ds\nonumber\\&\quad+2\varepsilon\beta\gamma\int_0^t\exp\left(\varepsilon\|u(s)\|_{\mathrm{L}^2}^2\right)\|u(s)\|_{\mathrm{L}^2}^2ds+2\varepsilon\beta\int_0^t\exp\left(\varepsilon\|u(s)\|_{\mathrm{L}^2}^2\right)\|u(s)\|_{\mathrm{L}^4}^4ds\nonumber\\&=\exp\left(\varepsilon\|u_0\|_{\mathrm{L}^2}^2\right)+2\varepsilon\beta(1+\gamma)\int_0^t\exp\left(\varepsilon\|u(s)\|_{\mathrm{L}^2}^2\right)(u(s)^2,u(s))ds\nonumber\\&\quad +\varepsilon\int_0^t\exp\left(\varepsilon\|u(s)\|_{\mathrm{L}^2}^2\right)\mathop{\mathrm{Tr}}(Q)ds+2\varepsilon\int_0^t\exp\left(\varepsilon\|u(s)\|_{\mathrm{L}^2}^2\right)\left(dW(s),u(s)\right) \nonumber\\&\quad+2\varepsilon^2\int_0^t\exp\left(\varepsilon\|u(s)\|_{\mathrm{L}^2}^2\right)\|Q^{1/2}u(s)\|_{\mathrm{L}^2}^2ds. \end{align} Let us take the expectation in (\ref{7.28}) and use an estimate similar to (\ref{5.63}) to obtain \begin{align}\label{7.29} &\mathbb{E}\left[\exp\left(\varepsilon\|u(t)\|_{\mathrm{L}^2}^2\right)\right]+2\varepsilon\left(\nu-\frac{\beta(1+\gamma^2)}{2\pi^2}\right)\mathbb{E}\left[\int_0^t\exp\left(\varepsilon\|u(s)\|_{\mathrm{L}^2}^2\right)\|u(s)\|_{\mathrm{H}_0^1}^2ds\right]\nonumber\\&\quad+\varepsilon\beta\mathbb{E}\left[\int_0^t\exp\left(\varepsilon\|u(s)\|_{\mathrm{L}^2}^2\right)\|u(s)\|_{\mathrm{L}^4}^4ds\right]\nonumber\\&\leq\mathbb{E}\left[\exp\left(\varepsilon\|u_0\|_{\mathrm{L}^2}^2\right)\right] +\varepsilon\mathbb{E}\left[\int_0^t\exp\left(\varepsilon\|u(s)\|_{\mathrm{L}^2}^2\right)\mathop{\mathrm{Tr}}(Q)ds\right] \nonumber\\&\quad+2\varepsilon^2\mathbb{E}\left[\int_0^t\exp\left(\varepsilon\|u(s)\|_{\mathrm{L}^2}^2\right)\|Q^{1/2}u(s)\|_{\mathrm{L}^2}^2ds\right]. \end{align} From (\ref{5.64}), we know that $\|Q^{1/2}u\|_{\mathrm{L}^2}^2\leq \frac{\|Q\|_{\mathcal{L}(\mathrm{L}^2)}}{\pi^2}\|u\|_{\mathrm{H}_0^1}^2$. Hence, choosing $\varepsilon>0$ as in \eqref{721}, so that $$2\varepsilon\|Q^{1/2}u\|_{\mathrm{L}^2}^2\leq \left(\nu-\frac{\beta(1+\gamma^2)}{2\pi^2}\right)\|u\|_{\mathrm{H}_0^1}^2,\ \text{ for } \ \nu>\frac{\beta(1+\gamma^2)}{2\pi^2},$$ we obtain from (\ref{7.29}) that \begin{align}\label{7.30} &\mathbb{E}\left[\exp\left(\varepsilon\|u(t)\|_{\mathrm{L}^2}^2\right)\right]+\varepsilon\left(\nu-\frac{\beta(1+\gamma^2)}{2\pi^2}\right)\mathbb{E}\left[\int_0^t\exp\left(\varepsilon\|u(s)\|_{\mathrm{L}^2}^2\right)\|u(s)\|_{\mathrm{H}_0^1}^2ds\right] \nonumber\\&\leq\mathbb{E}\left[\exp\left(\varepsilon\|u_0\|_{\mathrm{L}^2}^2\right)\right] +\varepsilon\mathop{\mathrm{Tr}}(Q)\mathbb{E}\left[\int_0^t\exp\left(\varepsilon\|u(s)\|_{\mathrm{L}^2}^2\right)ds\right]. \end{align} Since $u(\cdot)$ is stationary with law $\mu$, we have \begin{align}\label{7p31} \mathbb{E}\left[\exp\left(\varepsilon\|u(t)\|_{\mathrm{L}^2}^2\right)\right]=\mathbb{E}\left[\exp\left(\varepsilon\|u_0\|_{\mathrm{L}^2}^2\right)\right]=\int_{\mathrm{L}^2}\exp\left(\varepsilon\|u_0\|_{\mathrm{L}^2}^2\right)d\mu(u_0), \end{align} and \begin{align}\label{7p32} \mathbb{E}\left[\int_0^t\exp\left(\varepsilon\|u(s)\|_{\mathrm{L}^2}^2\right)\|u(s)\|_{\mathrm{H}_0^1}^2ds\right]=t\int_{\mathrm{L}^2}\exp\left(\varepsilon\|u_0\|_{\mathrm{L}^2}^2\right)\|u_0\|_{\mathrm{H}_0^1}^2d\mu(u_0).
\end{align} Using (\ref{7p31}) and (\ref{7p32}) in (\ref{7.30}), we obtain \begin{align}\label{7p33} \int_{\mathrm{L}^2}\exp\left(\varepsilon\|u_0\|_{\mathrm{L}^2}^2\right)\|u_0\|_{\mathrm{H}_0^1}^2d\mu(u_0)\leq\frac{\mathop{\mathrm{Tr}}(Q)}{\left(\nu-\frac{\beta(1+\gamma^2)}{2\pi^2}\right)}\int_{\mathrm{L}^2}\exp\left(\varepsilon\|u_0\|_{\mathrm{L}^2}^2\right)d\mu(u_0). \end{align} Now, for $R>0$, using the Poincar\'e inequality and (\ref{7p33}), we have (see \cite{ADe}) \begin{align} \int_{\mathrm{L}^2}\exp\left(\varepsilon\|u_0\|_{\mathrm{L}^2}^2\right)d\mu(u_0)&=\int_{\|u_0\|_{\mathrm{L}^2}\leq R}\exp\left(\varepsilon\|u_0\|_{\mathrm{L}^2}^2\right)d\mu(u_0)+\int_{\|u_0\|_{\mathrm{L}^2}> R}\exp\left(\varepsilon\|u_0\|_{\mathrm{L}^2}^2\right)d\mu(u_0)\nonumber\\&\leq \exp\left(\varepsilon R^2\right)+\frac{1}{R^2}\int_{\mathrm{L}^2}\exp\left(\varepsilon\|u_0\|_{\mathrm{L}^2}^2\right)\|u_0\|_{\mathrm{L}^2}^2d\mu(u_0)\nonumber\\&\leq \exp\left(\varepsilon R^2\right)+\frac{1}{R^2\pi^2}\int_{\mathrm{L}^2}\exp\left(\varepsilon\|u_0\|_{\mathrm{L}^2}^2\right)\|u_0\|_{\mathrm{H}_0^1}^2d\mu(u_0)\nonumber\\&\leq \exp\left(\varepsilon R^2\right)+\frac{\mathop{\mathrm{Tr}}(Q)}{R^2\pi^2\left(\nu-\frac{\beta(1+\gamma^2)}{2\pi^2}\right)}\int_{\mathrm{L}^2}\exp\left(\varepsilon\|u_0\|_{\mathrm{L}^2}^2\right)d\mu(u_0). \end{align} Choosing $R$ such that $$\frac{\mathop{\mathrm{Tr}}(Q)}{R^2\pi^2\left(\nu-\frac{\beta(1+\gamma^2)}{2\pi^2}\right)}=\frac{1}{2},$$ we obtain (\ref{7.25}), which completes the proof. \end{proof} \iffalse \section{Exponential Stability} Let us first consider the stationary Burgers-Huxley equation:\footnote{Strictly speaking, one has to use $\frac{d}{dx}$ instead of $\frac{\partial}{\partial x}$.} \begin{equation}\label{8.1} \left\{ \begin{aligned} -\nu\frac{\partial^2u_{\infty}}{\partial x^2}+\alpha u_{\infty}\frac{\partial u_{\infty}}{\partial x}-\beta u_{\infty}(1-u_{\infty})(u_{\infty}-\gamma)&=f, \\ u_{\infty}(0)=u_{\infty}(1)&=0. \end{aligned}\right. \end{equation} One can write down the abstract formulation of the equation \eqref{8.1} as \begin{align}\label{8p2} \nu Au_{\infty}+\alpha B(u_{\infty})-\beta c(u_{\infty})=f. \end{align} Given any $f\in\mathrm{H}^{-1}(\mathcal{O})$, our problem is to find $u_{\infty}\in\mathrm{H}_0^1(\mathcal{O})$ such that \begin{align}\label{8p3} \nu(\partial_xu_{\infty},\partial_xv)+\alpha(B(u_{\infty}),v)-\beta(c(u_{\infty}),v)=\langle f,v\rangle, \ \text{ for all }\ v\in\mathrm{H}_0^1(\mathcal{O}). \end{align} Let us first discuss the existence and uniqueness of weak solutions of the equation \eqref{8.1}. \begin{theorem} Let $f$ be given and \begin{align}\label{8pp4}\nu>\frac{\beta}{\pi^2}(1+\gamma^2).\end{align} Then the following hold true: \begin{itemize} \item [(i)] For every $f\in\mathrm{H}^{-1}(\mathcal{O})$ and $\nu>0$, there exists at least one solution of the system \eqref{8.1}. \item [(ii)] If $f\in\mathrm{L}^2(\mathcal{O})$, then all solutions belong to $D(A)$. \item [(iii)] If \begin{align}\label{8pp5} \nu>\left(\frac{\alpha}{2\pi^{3/4}}+\frac{2C\beta(1+\gamma)}{\pi^2}+\frac{9C\beta}{\pi^2}\frac{1}{\sqrt{2\nu\kappa}}\|f\|_{\mathrm{H}^{-1}}\right)\frac{1}{\sqrt{2\nu\kappa}}\|f\|_{\mathrm{H}^{-1}}+\frac{\beta(1+\gamma^2)}{2\pi^2}, \end{align} where $\kappa=\frac{1}{\left(\frac{\nu}{2}-\frac{\beta}{2\pi^2}(1+\gamma^2)\right)}$ and $C$ is the constant such that $\|u\|_{\mathrm{L}^{\infty}}\leq C\|u\|_{\mathrm{H}_0^1}$, then the solution of \eqref{8p3} is unique. \end{itemize} \end{theorem} \begin{proof} (i) We prove the existence of solutions to \eqref{8.1} by a Galerkin approximation method.
Let the functions $w_k=w_k(x)$, $k=1,2,\ldots,$ be smooth, and let the set $\{w_k(x)\}_{k=1}^{\infty}$ be an orthogonal basis of $\mathrm{H}_0^1(\mathcal{O})$ and an orthonormal basis of $\mathrm{L}^2(\mathcal{O})$ (page 504, \cite{RDJL}). One can take $\{w_k(x)\}_{k=1}^{\infty}$ as the complete set of normalized eigenfunctions of the operator $-\partial_{xx}$ in $\mathrm{H}_0^1(\mathcal{O})$. For a fixed positive integer $m$, we look for a function $u_m\in\mathrm{H}_0^1(\mathcal{O})$ of the form \begin{align}\label{8p4}u_m=\sum\limits_{k=1}^m\xi_m^kw_k,\ \xi_m^k\in\mathbb{R},\end{align} satisfying \begin{align}\label{8p5} &\nu(\partial_xu_m,\partial_xw_k)+\alpha(u_m\partial_xu_m,w_k)-\beta(u_m(1-u_m)(u_m-\gamma),w_k)=(f,w_k), \end{align} for $k=1,\ldots,m$. The equation \eqref{8p5} is also equivalent to \begin{align} \nu Au_m+\alpha P_mB(u_m)-\beta P_mc(u_m)=P_mf. \end{align} The equations \eqref{8p4}-\eqref{8p5} form a system of nonlinear equations for $\xi_m^1,\ldots,\xi_m^m$, and the existence of solutions is proved in the following way. We use Lemma 1.4, Chapter 2, \cite{Te} to get the existence of a solution to the system of equations \eqref{8p4}-\eqref{8p5}. Let $W=\text{Span}\left\{w_1,\ldots,w_m\right\}$, equip $W$ with the scalar product $(\partial_x\cdot,\partial_x\cdot)$ induced by $\mathrm{H}_0^1(\mathcal{O})$, and define $P=P_m$ by \begin{align} [P_m(u),v]=(\partial_xP_m(u),\partial_xv)=\nu(\partial_xu,\partial_xv)+\alpha b(u,u,v)-\beta (c(u),v)-(f,v), \end{align} for all $u,v\in W$. The continuity of the mapping $P_m:\mathrm{H}_0^1(\mathcal{O})\to\mathrm{H}_0^1(\mathcal{O})$ is easy to verify. In order to apply Lemma 1.4, Chapter 2, \cite{Te}, we need to show that $$[P_m(u),u]>0, \ \text{ for } \ [u]=k>0,$$ where $[\cdot]$ denotes the norm on $W$. In fact, it is the norm induced by $\mathrm{H}_0^1(\mathcal{O})$. Let us now consider \begin{align} [P_m(u),u]&=\nu\|\partial_xu\|_{\mathrm{L}^2}^2+\beta\gamma\|u\|_{\mathrm{L}^2}^2+\beta\|u\|_{\mathrm{L}^4}^4-\beta(1+\gamma)(u^2,u)-(f,u)\nonumber\\&\geq \nu\|\partial_xu\|_{\mathrm{L}^2}^2+\beta\gamma\|u\|_{\mathrm{L}^2}^2+\beta\|u\|_{\mathrm{L}^4}^4-\beta(1+\gamma)\|u\|_{\mathrm{L}^3}^3-\|f\|_{\mathrm{H}^{-1}}\|\partial_xu\|_{\mathrm{L}^2}\nonumber\\&\geq \frac{\nu}{2}\|\partial_xu\|_{\mathrm{L}^2}^2+\beta\gamma\|u\|_{\mathrm{L}^2}^2+\frac{\beta}{2}\|u\|_{\mathrm{L}^4}^4-\frac{\beta}{2}(1+\gamma)^2\|u\|_{\mathrm{L}^2}^2-\frac{1}{2\nu}\|f\|_{\mathrm{H}^{-1}}^2\nonumber\\&\geq\frac{\nu}{2}\|\partial_xu\|_{\mathrm{L}^2}^2-\frac{\beta}{2}(1+\gamma^2)\|u\|_{\mathrm{L}^2}^2-\frac{1}{2\nu}\|f\|_{\mathrm{H}^{-1}}^2\nonumber\\&\geq\left(\frac{\nu}{2}-\frac{\beta}{2\pi^2}(1+\gamma^2)\right)\|u\|_{\mathrm{H}_0^1}^2-\frac{1}{2\nu}\|f\|_{\mathrm{H}^{-1}}^2. \end{align} It follows that $[P_m(u),u]>0$ for $\|u\|_{\mathrm{H}_0^1}=k$ with $k$ sufficiently large; more precisely, for $k>\frac{1}{\nu\sqrt{1-\frac{\beta}{\pi^2\nu}(1+\gamma^2)}}\|f\|_{\mathrm{H}^{-1}}.$ Thus the hypotheses of Lemma 1.4, Chapter 2, \cite{Te} are satisfied and a solution $u_m$ of \eqref{8p5} exists.
Multiplying \eqref{8p5} by $\xi_m^k$ and summing over $k=1,\ldots,m$, we find \begin{align}\label{8.9} &\nu\|\partial_xu_m\|_{\mathrm{L}^2}^2+\beta\|u_m\|_{\mathrm{L}^4}^4+\beta\gamma\|u_m\|_{\mathrm{L}^2}^2\nonumber\\&=\beta(1+\gamma)(u_m^2,u_m)+(f,u_m)\nonumber\\&\leq\beta(1+\gamma)\|u_m\|_{\mathrm{L}^4}^2\|u_m\|_{\mathrm{L}^2}+\|f\|_{\mathrm{H}^{-1}}\|u_m\|_{\mathrm{H}_0^1}\nonumber\\&\leq\frac{\beta}{2}\|u_m\|_{\mathrm{L}^4}^4+\frac{\beta}{2}(1+\gamma)^2\|u_m\|_{\mathrm{L}^2}^2+\frac{\nu}{2}\|u_m\|_{\mathrm{H}_0^1}^2+\frac{1}{2\nu}\|f\|_{\mathrm{H}^{-1}}^2, \end{align} where we used H\"older's and Young's inequalities. From \eqref{8.9}, we deduce that \begin{align} \left(\frac{\nu}{2}-\frac{\beta}{2\pi^2}(1+\gamma^2)\right)\|u_m\|_{\mathrm{H}_0^1}^2+\frac{\beta}{2}\|u_m\|_{\mathrm{L}^4}^4\leq\frac{1}{2\nu}\|f\|_{\mathrm{H}^{-1}}^2. \end{align} Using the condition given in \eqref{8pp4}, we see that $\|u_m\|_{\mathrm{H}_0^1}^2$ is bounded uniformly, independently of $m$. Since $\mathrm{H}_0^1(\mathcal{O})$ is reflexive, using the Banach-Alaoglu theorem, we can extract a subsequence $\{u_{m_k}\}$ of $\{u_m\}$ such that \begin{align} u_{m_k}\xrightarrow{w} u_{\infty}, \ \text{ in }\ \mathrm{H}_0^1(\mathcal{O}). \end{align} Since the embedding $\mathrm{H}_0^1(\mathcal{O})\subset\mathrm{L}^2(\mathcal{O})$ is compact, one can extract a subsequence $\{u_{m_{k_j}}\}$ of $\{u_{m_k}\}$ such that \begin{align} u_{m_{k_j}}\to u_{\infty}, \ \text{ in }\ \mathrm{L}^2(\mathcal{O}). \end{align} Passing to the limit in \eqref{8p5} along the subsequence $\{m_{k_j}\}$, we find that $u_{\infty}$ is a solution to \eqref{8p3} and $u_{\infty}$ satisfies \begin{align}\label{8.14} \|u_{\infty}\|_{\mathrm{H}_0^1}\leq\frac{1}{\sqrt{2\nu\kappa}}\|f\|_{\mathrm{H}^{-1}}, \end{align} where $\kappa=\frac{1}{\left(\frac{\nu}{2}-\frac{\beta}{2\pi^2}(1+\gamma^2)\right)}$. (ii) For $f\in\mathrm{L}^2(\mathcal{O})$, clearly $Au_{\infty},B(u_{\infty}),c(u_{\infty})\in\mathrm{L}^2(\mathcal{O})$ and \eqref{8p2} is satisfied as an equality in $\mathrm{L}^2(\mathcal{O})$. Taking the inner product with $Au_{\infty}$ in \eqref{8p2}, we find \begin{align}\label{8p14} \nu\|Au_{\infty}\|_{\mathrm{L}^2}^2&=-\alpha (B(u_{\infty}),Au_{\infty})+\beta(c(u_{\infty}),Au_{\infty})+(f,Au_{\infty})\nonumber\\&\leq\alpha\|u_{\infty}\partial_xu_{\infty}\|_{\mathrm{L}^{2}}\|Au_{\infty}\|_{\mathrm{L}^2}+\beta\|c(u_{\infty})\|_{\mathrm{L}^2}\|Au_{\infty}\|_{\mathrm{L}^2}+\|f\|_{\mathrm{L}^2}\|Au_{\infty}\|_{\mathrm{L}^2}\nonumber\\&\leq\frac{3\nu}{4}\|Au_{\infty}\|_{\mathrm{L}^2}^2+\frac{\alpha^2}{2\nu}\|u_{\infty}\|_{\mathrm{L}^{\infty}}^2\|u_{\infty}\|_{\mathrm{H}_0^1}^2+\frac{1}{2\nu}\|f\|_{\mathrm{L}^2}^2\nonumber\\&\quad +\frac{\beta}{2\nu}\left((1+\gamma)\|u_{\infty}\|_{\mathrm{L}^{\infty}}^2+\gamma\|u_{\infty}\|_{\mathrm{L}^{\infty}}+\|u_{\infty}\|_{\mathrm{L}^{\infty}}^3\right). \end{align} Thus, from \eqref{8p14}, we deduce that \begin{align} \frac{\nu}{4}\|Au_{\infty}\|_{\mathrm{L}^2}^2\leq \frac{C\alpha^2}{2\nu}\|u_{\infty}\|_{\mathrm{H}_0^1}^4+\frac{1}{2\nu}\|f\|_{\mathrm{L}^2}^2 +\frac{C\beta}{2\nu}\left((1+\gamma)\|u_{\infty}\|_{\mathrm{H}_0^1}^2+\gamma\|u_{\infty}\|_{\mathrm{H}_0^1}+\|u_{\infty}\|_{\mathrm{H}_0^1}^3\right)<\infty, \end{align} since $u_{\infty}\in\mathrm{H}_0^1(\mathcal{O})$ satisfies \eqref{8.14}, and hence $u_{\infty}\in D(A)$. (iii) For uniqueness, we take $u_{\infty}$ and $v_{\infty}$ as two solutions of \eqref{8p3}. Let us define $w_{\infty}:=u_{\infty}-v_{\infty}$.
Then $w_{\infty}$ satisfies: \begin{align}\label{8p17} \nu(\partial_xw_{\infty},\partial_xv)+\alpha(B(u_{\infty})-B(v_{\infty}),v)-\beta(c(u_{\infty})-c(v_{\infty}),v)=0, \end{align} for all $v\in\mathrm{H}_0^1(\mathcal{O})$. Taking $v=w_{\infty}$ in \eqref{8p17}, we have \begin{align}\label{8p18} \nu\|\partial_xw_{\infty}\|_{\mathrm{L}^2}^2&=-\alpha(w_{\infty}\partial_xu_{\infty},w_{\infty})-\alpha(v_{\infty}\partial_xw_{\infty},w_{\infty})+\beta(1+\gamma)(u_{\infty}^2-v_{\infty}^2,w_{\infty})\nonumber\\&\quad-\beta\gamma\|w_{\infty}\|_{\mathrm{L}^2}^2-\beta(u_{\infty}^3-v_{\infty}^3,w_{\infty})\nonumber\\&=-\alpha(w_{\infty}\partial_xw_{\infty},w_{\infty})-\alpha(w_{\infty}\partial_xv_{\infty},w_{\infty})-\alpha(v_{\infty}\partial_xw_{\infty},w_{\infty})\nonumber\\&\quad+\beta(1+\gamma)((u_{\infty}+v_{\infty})w_{\infty},w_{\infty})-\beta\gamma\|w_{\infty}\|_{\mathrm{L}^2}^2\nonumber\\&\quad-\beta((u_{\infty}^2+v_{\infty}^2)w_{\infty},w_{\infty})-\beta(u_{\infty}v_{\infty}w_{\infty},w_{\infty})\nonumber\\&=\frac{\alpha}{2}(v_{\infty},\partial_xw_{\infty}^2)+\beta(1+\gamma)(w_{\infty}^2,w_{\infty})+2\beta(1+\gamma)(v_{\infty}w_{\infty},w_{\infty})-\beta\gamma\|w_{\infty}\|_{\mathrm{L}^2}^2\nonumber\\&\quad-\beta\|w_{\infty}\|_{\mathrm{L}^4}^4-3\beta(v_{\infty}w_{\infty}^2,w_{\infty})-3\beta(v_{\infty}^2w_{\infty},w_{\infty}). \end{align} From \eqref{8p18}, using H\"older's, Gagliardo-Nirenberg, Poincar\'e and Young's inequalities, we further have \begin{align}\label{8.19} & \nu\|\partial_xw_{\infty}\|_{\mathrm{L}^2}^2+\beta\gamma\|w_{\infty}\|_{\mathrm{L}^2}^2+\beta\|w_{\infty}\|_{\mathrm{L}^4}^4+3\beta(v_{\infty}^2w_{\infty},w_{\infty})\nonumber\\&= -\frac{\alpha}{2}(\partial_xv_{\infty},w_{\infty}^2)+\beta(1+\gamma)(w_{\infty}^2,w_{\infty})+2\beta(1+\gamma)(v_{\infty}w_{\infty},w_{\infty})-3\beta(v_{\infty}w_{\infty}^2,w_{\infty})\nonumber\\&\leq\frac{\alpha}{2}\|\partial_xv_{\infty}\|_{\mathrm{L}^2}\|w_{\infty}\|_{\mathrm{L}^{4}}^2+\beta(1+\gamma)\|w_{\infty}\|_{\mathrm{L}^4}^2\|w_{\infty}\|_{\mathrm{L}^2}+2\beta(1+\gamma)\|v_{\infty}\|_{\mathrm{L}^{\infty}}\|w_{\infty}\|_{\mathrm{L}^2}^2\nonumber\\&\quad+3\beta\|v_{\infty}\|_{\mathrm{L}^{\infty}}\|w_{\infty}\|_{\mathrm{L}^4}^2\|w_{\infty}\|_{\mathrm{L}^2}\nonumber\\&\leq\frac{\alpha}{2\pi^{3/4}}\|v_{\infty}\|_{\mathrm{H}_0^1}\|w_{\infty}\|_{\mathrm{H}_0^1}^2+\frac{\beta}{2}\|w_{\infty}\|_{\mathrm{L}^4}^4+\frac{\beta(1+\gamma)^2}{2}\|w_{\infty}\|_{\mathrm{L}^2}^2+ 2C\beta(1+\gamma)\|v_{\infty}\|_{\mathrm{H}_0^1}\|w_{\infty}\|_{\mathrm{L}^2}^2\nonumber\\&\quad+\frac{\beta}{4}\|w_{\infty}\|_{\mathrm{L}^4}^4+9C\beta\|v_{\infty}\|_{\mathrm{H}_0^1}^2\|w_{\infty}\|_{\mathrm{L}^2}^2, \end{align} where $C$ is the constant such that $\|u\|_{\mathrm{L}^{\infty}}\leq C\|u\|_{\mathrm{H}_0^1}$. From \eqref{8.19}, we get \begin{align}\label{8.21} &\left\{\nu-\left[\left(\frac{\alpha}{2\pi^{3/4}}+\frac{2C\beta(1+\gamma)}{\pi^2}+\frac{9C\beta}{\pi^2}\|v_{\infty}\|_{\mathrm{H}_0^1}\right)\|v_{\infty}\|_{\mathrm{H}_0^1}+\frac{\beta(1+\gamma^2)}{2\pi^2}\right]\right\}\|w_{\infty}\|_{\mathrm{H}_0^1}^2\nonumber\\&\quad+\frac{\beta}{4}\|w_{\infty}\|_{\mathrm{L}^4}^4+3\beta(v_{\infty}^2w_{\infty},w_{\infty})\leq 0.
\end{align} Since $v_{\infty}$ satisfies \eqref{8.14}, from \eqref{8.21}, we obtain \begin{align}\label{8.22} &\left\{\nu-\left[\left(\frac{\alpha}{2\pi^{3/4}}+\frac{2C\beta(1+\gamma)}{\pi^2}+\frac{9C\beta}{\pi^2}\frac{1}{\sqrt{2\nu\kappa}}\|f\|_{\mathrm{H}^{-1}}\right)\frac{1}{\sqrt{2\nu\kappa}}\|f\|_{\mathrm{H}^{-1}}+\frac{\beta(1+\gamma^2)}{2\pi^2}\right]\right\}\|w_{\infty}\|_{\mathrm{H}_0^1}^2\nonumber\\&\leq 0. \end{align} If the condition \eqref{8pp5} is satisfied, then we have $u_{\infty}=v_{\infty}$. \end{proof} Let us now assume that $f$ is independent of $t$ in \begin{equation}\label{2pp7} \left\{ \begin{aligned} \frac{du(t)}{dt}+\nu Au(t)&=-\alpha B(u(t))+\beta c(u(t))+f(t), \ t\in(0,T),\\ u(0)&=u_0\in\mathrm{L}^2(\mathcal{O}), \end{aligned} \right. \end{equation} and discuss the exponential stability of the stationary solution $u_\infty$. \begin{definition} A weak solution $u(t)$ of the system (\ref{2pp7}) is said to converge to $u_{\infty}$ \emph{exponentially in $\mathrm{L}^2(\mathcal{O})$} if there exists a number $a > 0$ such that \begin{align*} \|u(t)-u_{\infty}\|_{\mathrm{L}^2}\leq \|u_0-u_{\infty}\|_{\mathrm{L}^2}e^{-at},\ t\geq 0. \end{align*} In particular, if $u_{\infty}$ is a stationary solution of system (\ref{2pp7}), then $u_{\infty}$ is called \emph{exponentially stable in $\mathrm{L}^2(\mathcal{O})$} provided that any weak solution to (\ref{2pp7}) converges to $u_{\infty}$ at the same exponential rate $a > 0$. \end{definition} \begin{theorem}\label{dexp} Let $u_{\infty}$ be the unique solution of the system (\ref{8p2}). If $u(\cdot)$ is any weak solution of the system (\ref{2pp7}) with $u_0\in\mathrm{L}^2(\mathcal{O})$ and $f\in\mathrm{H}^{-1}(\mathcal{O})$ arbitrary, then $u_{\infty}$ is exponentially stable in $\mathrm{L}^2(\mathcal{O})$ and \begin{align}\label{8.24} u(t)\to u_{\infty}\ \text{ in }\ \mathrm{L}^2(\mathcal{O})\text{ as }\ t\to\infty, \end{align} provided \begin{align}\label{8.25} \left(\frac{\nu\pi^2}{2}-\frac{\beta(1+\gamma^2)}{2}\right)> \left(\frac{C}{2\nu}+{9C\beta}\right)\|u_{\infty}\|_{\mathrm{H}_0^1}^2+2C\beta(1+\gamma)\|u_{\infty}\|_{\mathrm{H}_0^1}, \end{align} where $u_{\infty}$ satisfies \eqref{8.14}. \end{theorem} \begin{proof} Let us define $w=u-u_{\infty}$, so that $w$ satisfies the following: \begin{equation}\label{8.26} \left\{ \begin{aligned} \frac{dw(t)}{dt}+\nu Aw(t)&=-\alpha (B(u(t))-B(u_{\infty}))+\beta (c(u(t))-c(u_{\infty})), \ t\in(0,T),\\ w(0)&=u_0-u_{\infty}. \end{aligned} \right. \end{equation} Taking the inner product of the first equation in \eqref{8.26} with $w$, we find \begin{align}\label{8.27} &\frac{1}{2}\frac{d}{dt}\|w(t)\|_{\mathrm{L}^2}^2+\nu\|\partial_xw(t)\|_{\mathrm{L}^2}^2+\beta\|w(t)\|_{\mathrm{L}^4}^4+\beta\gamma\|w(t)\|_{\mathrm{L}^2}^2+3\beta(u_{\infty}^2w(t),w(t))\nonumber\\&=-\alpha((B(u(t))-B(u_{\infty})),w(t))+\beta((c(u(t))-c(u_{\infty})),w(t))\nonumber\\&=\frac{\alpha}{2}(u_{\infty},w(t)\partial_xw(t))+\beta(1+\gamma)(w(t)^2,w(t))+2\beta(1+\gamma)(u_{\infty}w(t),w(t))\nonumber\\&\qquad-3\beta(u_{\infty}w(t)^2,w(t))\nonumber\\&=:\sum_{i=1}^4J_i, \end{align} where we used \eqref{8p18}.
We estimate $J_i$, $i=1,\ldots,4$, using H\"older's, Gagliardo-Nirenberg, Poincar\'e and Young's inequalities as \begin{align}\label{8.28} |J_1|&\leq\|u_{\infty}\|_{\mathrm{L}^{\infty}}\|w\|_{\mathrm{L}^2}\|\partial_xw\|_{\mathrm{L}^2}\leq\frac{\nu}{2}\|w\|_{\mathrm{H}_0^1}^2+\frac{C}{2\nu}\|u_{\infty}\|_{\mathrm{H}_0^1}^2\|w\|_{\mathrm{L}^2}^2,\\ |J_2|&\leq\beta(1+\gamma)\|w\|_{\mathrm{L}^4}^2\|w\|_{\mathrm{L}^2}\leq\frac{\beta}{2}\|w\|_{\mathrm{L}^4}^4+\frac{\beta(1+\gamma)^2}{2}\|w\|_{\mathrm{L}^2}^2,\\ |J_3|&\leq 2\beta(1+\gamma)\|u_{\infty}\|_{\mathrm{L}^{\infty}}\|w\|_{\mathrm{L}^4}^2\leq 2C\beta(1+\gamma)\|u_{\infty}\|_{\mathrm{H}_0^1}\|w\|_{\mathrm{L}^2}^2,\\ |J_4|&\leq 3\beta\|u_{\infty}\|_{\mathrm{L}^{\infty}}\|w\|_{\mathrm{L}^4}^2\|w\|_{\mathrm{L}^2}\leq\frac{\beta}{4}\|w\|_{\mathrm{L}^4}^4+{9C\beta}\|u_{\infty}\|_{\mathrm{H}_0^1}^2\|w\|_{\mathrm{L}^2}^2.\label{8.31} \end{align} Combining \eqref{8.28}-\eqref{8.31} and substituting them in \eqref{8.27}, we obtain \begin{align}\label{8.32} &\frac{d}{dt}\|w(t)\|_{\mathrm{L}^2}^2+\frac{\beta}{2}\|w(t)\|_{\mathrm{L}^4}^4+6\beta(u_{\infty}^2w(t),w(t))\nonumber\\&\quad+2\left\{\left(\frac{\nu\pi^2}{2}-\frac{\beta(1+\gamma^2)}{2}\right)-\left[ \left(\frac{C}{2\nu}+{9C\beta}\right)\|u_{\infty}\|_{\mathrm{H}_0^1}^2+2C\beta(1+\gamma)\|u_{\infty}\|_{\mathrm{H}_0^1}\right]\right\}\|w(t)\|_{\mathrm{H}_0^1}^2\nonumber\\&\leq 0. \end{align} Under the condition given in \eqref{8.25}, an application of the variation of constants formula to \eqref{8.32} yields \begin{align} \|w(t)\|_{\mathrm{L}^2}^2\leq \|w_0\|_{\mathrm{L}^2}^2e^{-\widetilde{\kappa}t}, \end{align} for all $t\in[0,T]$, where $$\widetilde{\kappa}=2\left\{\left(\frac{\nu\pi^2}{2}-\frac{\beta(1+\gamma^2)}{2}\right)-\left[ \left(\frac{C}{2\nu}+{9C\beta}\right)\|u_{\infty}\|_{\mathrm{H}_0^1}^2+2C\beta(1+\gamma)\|u_{\infty}\|_{\mathrm{H}_0^1}\right]\right\},$$ and the exponential stability follows. \end{proof} \fi \medskip\noindent {\bf Acknowledgments:} M. T. Mohan would like to thank the Department of Science and Technology (DST), India, for the Innovation in Science Pursuit for Inspired Research (INSPIRE) Faculty Award (IFA17-MA110), and the Indian Institute of Technology Roorkee for providing a stimulating scientific environment and resources.
\section{Introduction} Image down-scaling and compression techniques are widely used to meet the limits of hardware storage and data capacity, but they often sacrifice visual quality and also hamper visual detection and recognition. Compression artifact reduction (CAR)~\cite{shen1998review} and single image super-resolution (SISR) \cite{allebach1996edge} have been used in manifold applications, \eg { }digital zoom on smartphones \cite{wronski2019handheld}, video streaming \cite{xiang2020zooming} and print quality enhancement \cite{xiao2017real, xiang2019blockwise}, to restore a high-quality and high-resolution image. Since Dong \etal{ }\cite{dong2014learning} first proposed SRCNN, which applied a three-layer convolutional neural network (CNN) to the SISR task, more and more works \cite{lim2017enhanced, zhang2018residual, zhang2020residual} have explored how to make use of deep neural networks (DNN) to achieve better image quality as measured by PSNR and SSIM \cite{wang2004image}, or better visual quality \cite{ledig2017photo, wang2018esrgan} as measured by other perceptual metrics \cite{johnson2016perceptual, ma2017learning}. \begin{figure}[tb] \captionsetup[subfigure]{labelformat=empty} \begin{center} \begin{subfigure}[b]{0.30\linewidth} \includegraphics[width=\linewidth]{img/fig1/lr_1.jpg} \subcaption{Input} \end{subfigure} \begin{subfigure}[b]{0.30\linewidth} \includegraphics[width=\linewidth]{img/fig1/dncnn_rcan_1.png} \subcaption{CAR+SR} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{img/fig1/CAJRN_1.png} \subcaption{Ours (CAJNN)} \end{subfigure} \end{center} \vspace{-5mm} \caption{Demonstration of the joint CAR and SR task. For a user's image without ground truth, our joint CAR and SR model (CAJNN) can generate better output with sharper edges and significantly fewer artifacts compared with the two-stage CAR+SR method.} \label{fig:teaser} \vspace{-4mm} \end{figure} Conventional methods adopt a two-stage pipeline to improve the quality and resolution of real-world images: first preprocess the user's photos with a compression artifacts reduction (CAR) algorithm \cite{dong2015compression,galteri2017deep,zhang2008adaptive,zhang2018one,zhang2017beyond}, and then apply a super-resolution (SR) algorithm \cite{allebach1996edge, dong2014learning, lim2017enhanced, zhang2018residual, zhang2020residual,zhang2018image, atkins2001optimal, dong2016accelerating,shi2016real}. However, the CAR step often causes loss of high-frequency information, which results in a lack of detail in the reconstructed SR images. Besides, the computation and data transmission between the two models are time-consuming. To deal with these issues, a single-stage method that jointly solves the Compression Artifact Reduction and Super-Resolution (CARSR) problems is needed, one that reduces artifacts while retaining most of the details for the upscaling step, with a short run-time. Both CAR and SR aim to learn the high-frequency information needed for reconstruction. Thus, instead of simply concatenating two networks together, we design two functional modules in a single-stage network, which reduces the model size by merging the two reconstruction processes into one and directly obtains high-quality SR output without reconstructing the intermediate artifact-free LR images.
Towards this end, we propose a context-aware joint CAR and SR neural network (CAJNN) that can make use of the locally related features in low-quality, low-resolution images to reconstruct high-quality, high-resolution images. To train this network, we construct a paired LR-HR training dataset based on modeling the degradation kernels of web images. Our model turns out to be able to reconstruct high-resolution and artifact-free images with high stability for users' images from a wide variety of web apps (\eg { }Facebook, Instagram, WeChat). Figure \ref{fig:teaser} illustrates the performance of our proposed algorithm and the benefits of the single-stage joint CARSR method compared with previous two-stage methods: our method reconstructs a more visually appealing output with accurate structures, sharp edges, and significantly fewer compression artifacts. These output images are not only more recognizable to human viewers, but also to off-the-shelf computer vision algorithms. In this paper, we demonstrate that our proposed CAJNN can enhance the detection and recognition accuracy of high-level vision tasks by reducing the compression artifacts and increasing the resolution of input images. To summarize, our contributions are mainly three-fold: (1) We propose a novel CAJNN framework that jointly solves the CAR and SR problems for real-world images from unknown devices with unknown quality factors. Here, we explore ways to represent and combine both local and non-local information to improve image reconstruction performance without knowing the input quality factor. (2) Our experiments show that CAJNN achieves new state-of-the-art performance on multiple datasets, \eg { }Set5 \cite{bevilacqua2012low}, Set14 \cite{zeyde2010single}, BSD100 \cite{martin2001database}, Urban100 \cite{huang2015single}, \etc { }as measured by the PSNR and SSIM \cite{wang2004image} metrics. Compared with the prior art, it generates more stable and reliable outputs across all levels of compression quality factors. (3) We provide a new idea for enhancing high-level computer vision tasks like real-scene text recognition and extremely tiny face detection: by preprocessing the input data with our pretrained model, we can improve the performance of existing detectors. Our model demonstrates its effectiveness on the WIDER face dataset \cite{yang2016wider} and the ICDAR2013 Focused Scene Text dataset \cite{karatzas2013icdar}. \section{Related Work} \begin{description}[style=unboxed,leftmargin=0cm] \item[CNN-based Single Image Super-Resolution] Convolutional Neural Network (CNN) methods have demonstrated a remarkable capability to recover LR images with known kernels since the pioneering work of Dong \etal{ }\cite{dong2014learning}, which adopted a three-layer CNN to learn an end-to-end mapping from LR images to HR images. The follow-up work FSRCNN \cite{dong2016accelerating} established the general structure used by most SR networks to this day: conduct most computations in the low-resolution domain and upsample the image to the required scale at the end of the network. After 2016, more and more works began to explore how to make networks go deeper. EDSR \cite{lim2017enhanced} reduces the number of parameters by removing the batch normalization layer, and shares the parameters between the low-scale and high-scale models to achieve better training results.
RDN \cite{zhang2018residual, zhang2020residual} and RRDB \cite{wang2018esrgan} employ densely-connected residual groups as the major reconstruction block to reach large depth and to allow sufficient low-frequency information to be bypassed. In the meantime, some useful structures have been introduced to enhance the processing speed or output quality. Shi \etal { } \cite{shi2016real} designed a sub-pixel upscaling mechanism. RCAN \cite{zhang2018image} introduces a channel attention mechanism to rescale channel-wise features adaptively, and SAN \cite{dai2019second} exploits a more powerful feature expression with second-order channel attention. \item[Compression Artifacts Reduction] Lossy compression methods~\cite{doulamis1998low, wu2017digital} are widely applied in web image transmission due to their higher compression rates. Traditional methods for the CAR problem generally fall into two categories: unsupervised methods, which include removing noise and increasing sharpness \cite{zhang2008adaptive}, and supervised methods like dictionary-based algorithms \cite{liu2015data}. After the success of SRCNN on the super-resolution task, Yu \etal{ }\cite{yu2016deep} directly applied its architecture to compression artifacts suppression. Similar to the development of SR, CNN-based CAR networks also went deeper with the introduction of residual blocks and skip connections \cite{chen2018cisrdcnn,svoboda2016compression,zini2019deep}. Besides, the SSIM loss has been employed \cite{galteri2017deep} as a supervision method to obtain better performance than the MSE loss. JPEG-related priors are also considered in the network structure design: \eg { }DDCN \cite{guo2016building} adds a Discrete Cosine Transform (DCT)-domain branch alongside the pixel-domain network, and D$^3$ \cite{wang2016d3} takes a further step in the practice of dual-domain approaches \cite{liu2015data} by converting sparse-coding approaches into a one-step sparse inference module. \end{description} Unlike the above approaches, which require reconstructing intermediate clean LR images, our joint CARSR framework directly obtains artifact-free HR images without prior information about quality factors or explicit CAR supervision in the LR domain. \section{Joint Compression Artifacts Reduction and Super-Resolution} Given an LR JPEG image $I^{LRLQ}$, our goal is to reconstruct the high-resolution, high-quality image $G(I^{LRLQ})$ that approaches the high-resolution, high-quality ground truth $I^{HRHQ}$ with a generator $G$. The CARSR task can be expressed as: \begin{equation}\label{eq:target} \arg\!\min_{\theta}l(I^{HRHQ}, G(I^{LRLQ}, \theta)), \end{equation} where $l$ is any designated loss function (\eg { }MSE, L1, Charbonnier, \etc ) and $G$ is the function representing our deep neural network with parameters $\theta$. The degraded input is generated as $I^{LRLQ}=F((I^{HRHQ}\otimes k)\downarrow_s, q)$, where $F(\cdot, q)$ denotes compression with quality factor $q$, $\otimes$ stands for the convolution operation, $k$ is the degradation kernel of the downsampling method, and $s$ is the downscaling factor; we thus wish $G_{\theta}$ to approximate the inverse of this degradation. To effectively handle the CARSR task, we propose a single-stage framework, CAJNN. Our proposed model is end-to-end trainable with $I^{HRHQ}$ and $I^{LRLQ}$ pairs according to the objective above. The CAJNN framework mainly consists of three modules (see Figure \ref{fig:network}): the \textit{context-aware feature extractor}, the \textit{reconstruction module}, and the \textit{upsampling and enhancement module}.
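Before detailing each module, we give a rough sketch of how they compose. The following PyTorch-style code is our own illustration, not the reference implementation: class and argument names are assumptions, the \texttt{RRDB} body is reduced to a plain residual convolution standing in for the residual-in-residual dense block of \cite{wang2018esrgan}, and \texttt{ASPP} is sketched below.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class RRDB(nn.Module):
    # Stand-in for the residual-in-residual dense block of ESRGAN;
    # a single residual convolution keeps the sketch short.
    def __init__(self, nf):
        super().__init__()
        self.conv = nn.Conv2d(nf, nf, 3, padding=1)

    def forward(self, x):
        return x + self.conv(x)

class CAJNN(nn.Module):
    def __init__(self, c=3, nf=64, n_blocks=20, scale=4):
        super().__init__()
        self.head = nn.Conv2d(c, nf, 3, padding=1)  # initial 3x3 extractor
        self.aspp = ASPP(nf)                        # context-aware extractor
        self.trunk = nn.Sequential(*[RRDB(nf) for _ in range(n_blocks)])
        self.pre_up = nn.Conv2d(nf, c * scale ** 2, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)       # sub-pixel upsampling
        self.enhance = nn.Sequential(nn.Conv2d(c, c, 3, padding=1),
                                     nn.Conv2d(c, c, 3, padding=1))
        self.scale = scale

    def forward(self, x):                           # x: (N, c, h, w)
        feat = self.trunk(self.aspp(self.head(x)))
        sr = self.enhance(self.shuffle(self.pre_up(feat)))
        # long-range skip: low-frequency content bypasses the trunk
        base = F.interpolate(x, scale_factor=self.scale,
                             mode='bilinear', align_corners=False)
        return base + sr
\end{verbatim}
Note how the bilinear branch carries the low-frequency content, so the trunk only needs to reconstruct the residual.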
The context-aware feature extractor captures and assembles both intra- and inter-block information from different receptive fields. The reconstruction module further refines the extracted feature maps. Finally, after the processing of the upsampling and enhancement module, these feature maps are converted to high-resolution outputs. \subsection{Model} Here we discuss the CAJNN structure in detail. The majority of our network operates in the feature domain. Given $I^{LRLQ}$ ($c \cdot h \cdot w$ in size), a feature extraction layer first turns the image into feature maps ($n_f \cdot h \cdot w$ in size, where $n_f$ denotes the number of feature channels) for the subsequent processing. The feature maps are converted to a high-resolution image ($c \cdot H \cdot W$ in size) after passing through the upsampling and enhancement module. To achieve a balance between GPU capacity and output quality, we apply $n_f = 64$ channels to ensure enough information for the reconstruction. We adopt a $3\times3$ convolution layer that serves as the initial feature extractor. After this module, the input image $I^{LRLQ}$ is turned into a $64\cdot h\cdot w$ tensor $f^{L}$. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{img/Picture1.png} \caption{The network architecture of our proposed CAJNN. It directly reconstructs artifact-free HR images from the LR low-quality images $I^{LRLQ}$. Atrous Spatial Pyramid Pooling (ASPP) is adopted to utilize the inter-block features and intra-block contexts for the joint CARSR task. The reconstruction module turns the features into a deep feature map, which is converted to a high-quality SR output $I^{SRHQ}$ by the upsampling and enhancement module.} \label{fig:network} \end{figure} \subsubsection{Context-aware Feature Extractor} The pipeline of most lossy compression methods (\eg { }JPEG, H.264/AVC, H.265/HEVC) involves the following steps: color space transformation, downsampling, block splitting, discrete cosine transform (DCT), quantization, and entropy encoding. Some previous research assumed that the quality factors of the input images are known, and that the original images are well-aligned to the $8\times 8$ JPEG block boundaries. However, real-world inputs do not always meet such assumptions. In the worst case, the input images might be compressed multiple times and contain sub-blocks or larger blocks, which requires the model to be insensitive, or even blind, to the encoding block alignment. Thus, the spatial context information of both intra- and inter-JPEG blocks is essential for designing a CARSR network. We adopt an atrous spatial pyramid pooling (ASPP) module \cite{chen2017deeplab} to extract and integrate multi-scale features. We adjust the dilation rates of each layer in the pyramid to extend the filter's receptive field for extracting different ranges of context information, such that the largest field-of-view covers the $8 \times 8$ block. Besides, we should avoid sampling overlap between the different levels of the $3\times 3$ convolutions. Considering the factors above, we choose 1, 3, 4 as the dilation rates to find a good balance between accurately retrieving local details and assimilating context information between adjacent blocks. The input tensors are sent to three parallel layers of the pyramid: a $3\times 3$ convolutional layer with dilation rate = 1, a $3\times 3$ convolutional layer with dilation rate = 3, and a $3\times 3$ convolutional layer with dilation rate = 4 (a minimal sketch of this module is given below).
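The sketch just referenced, in the same illustrative PyTorch style as before (not the reference code; the padding of each branch is set equal to its dilation rate so that all branches preserve the spatial size), is:
\begin{verbatim}
import torch
import torch.nn as nn

class ASPP(nn.Module):
    # Context-aware feature extractor: three parallel 3x3 branches
    # with dilation rates 1, 3 and 4, fused by a 1x1 convolution.
    def __init__(self, nf=64):
        super().__init__()
        self.b1 = nn.Conv2d(nf, nf, 3, padding=1, dilation=1)
        self.b3 = nn.Conv2d(nf, nf, 3, padding=3, dilation=3)
        self.b4 = nn.Conv2d(nf, nf, 3, padding=4, dilation=4)
        self.fuse = nn.Conv2d(3 * nf, nf, 1)

    def forward(self, f):
        out = torch.cat([self.b1(f), self.b3(f), self.b4(f)], dim=1)
        return self.fuse(out)
\end{verbatim}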
The outputs of these three layers are concatenated and aggregated by a $1\times 1$ convolution. The process in ASPP can be described by \begin{equation} f^{L'} = [C_{3\times 3,1}\otimes f^L | C_{3\times 3,3}\otimes f^L | C_{3\times 3,4}\otimes f^L]\otimes C_{1\times 1, 1}, \end{equation} where $f^{L'}$ denotes the output feature ($64\cdot h \cdot w$ in size), $C_{a\times a, r}$ represents the parameters of an $a\times a$ convolution with dilation rate $r$, and $|$ denotes concatenation along the channel dimension. \subsubsection{Reconstruction} RRDB (residual-in-residual dense block) \cite{wang2018esrgan} is applied as the basic block for the reconstruction trunk. Compared with residual blocks, it densely connects the convolution layers within local groups while removing the batch normalization layers. In our network, the reconstruction module includes 20 RRDBs. \subsubsection{Upsampling and Enhancement} After the reconstruction module, the image feature is preprocessed by a $3\times 3$ convolution layer before the PixelShuffle layer \cite{shi2016real} for upsampling. The PixelShuffle layer produces an HR image directly from LR feature maps, with one upscaling filter for each feature map. Compared with upconvolution, the PixelShuffle layer is $\log_2 s^2$ times faster in theory, because it applies sub-pixel activation to convert most of the computations from the HR to the LR domain. The feature $f^{L''}$ is turned into a $c \cdot sh \cdot sw$ HR image by the PixelShuffle layer, which can be described by \begin{equation} I^{SR'} = PS(W_L\otimes f^{L''} + b_L), \end{equation} where $W_L$ denotes the convolution weights and $b_L$ the bias in the LR domain, and $PS$ is a periodic shuffling operator that re-arranges the input LR feature tensor $f^{L''}$ ($c\cdot s^2\cdot h \cdot w$) into an HR tensor of shape $c \cdot sh \cdot sw$: \begin{equation} PS(T)_{x,y,i} = T_{\lfloor x/s \rfloor, \lfloor y/s \rfloor, \, c \cdot s \cdot \text{mod}(y,s) + c \cdot \text{mod}(x,s) + i}, \end{equation} where $i\in\{0,\ldots,c-1\}$ indexes the output channel. Instead of directly outputting the high-resolution image, we process it through two $3\times 3$ convolution layers for further enhancement, and get $I^{SR} = C_{3\times 3, 1} \otimes(C_{3\times 3, 1} \otimes I^{SR'})$. To make the major network focus on learning the high-frequency information in the input image, we bilinearly upsample the input LR image $I^{LRLQ}$ and add it to form the final output $G(I^{LRLQ}, \theta)$: \begin{equation} G(I^{LRLQ}, \theta) = I^{LRLQ}\uparrow_s + I^{SR}. \end{equation} This long-range skip connection changes the target of our major network from directly reconstructing a high-resolution image to reconstructing its residual. By letting the low-frequency information of the input bypass the major network, it lowers the difficulty of reconstruction and increases the convergence speed of the network. \section{Experiments and Analysis} \subsection{Experimental Setup} \begin{description}[style=unboxed,leftmargin=0cm] \item[Training Dataset] In this paper, we choose the DIV2K dataset \cite{Agustsson_2017_CVPR_Workshops} (800 RGB images of 2k resolution) and the Flickr2K dataset \cite{timofte2017ntire} (2650 RGB images of 2k resolution) as our training set. To get training pairs that approach the degradation kernels of web images, we first model the downsampling and compression types of popular web platforms. We also discover that adding severely compressed samples to the training set can improve the output quality in terms of PSNR, even for input images compressed with high quality factors; a sketch of the resulting pair-generation procedure is given below.
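This procedure is approximated by the following Python sketch; here Pillow is used only as a stand-in for the MATLAB resizing and JPEG encoding actually employed (so interpolation constants may differ slightly), and the cropping and augmentation steps are omitted:
\begin{verbatim}
import random
from io import BytesIO
from PIL import Image

def make_training_pair(hr_path, scale=4, qf_range=(10, 100)):
    # Bicubic downsampling followed by JPEG compression with a
    # random quality factor, mimicking the degradation of web images.
    hr = Image.open(hr_path).convert('RGB')
    lr = hr.resize((hr.width // scale, hr.height // scale),
                   Image.BICUBIC)
    buf = BytesIO()
    lr.save(buf, format='JPEG', quality=random.randint(*qf_range))
    buf.seek(0)
    lrlq = Image.open(buf).convert('RGB')
    return lrlq, hr
\end{verbatim}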
Based on these pre-experimental results, the images of the training set are downsampled with a scaling factor $s=4$ and compressed by MATLAB \cite{MATLAB:2018b} with random quality factors from 10 to 100. Besides, we perform data augmentation on these images by random cropping, random rotation by $90^{\circ}$, $180^{\circ}$, and $270^{\circ}$, and random horizontal flipping. As a result, each cropped image patch can have up to eight augmented variants. \item[Test Datasets] We compare the performance of our model and previous methods on Set5 \cite{bevilacqua2012low}, Set14 \cite{zeyde2010single}, BSD100 \cite{martin2001database}, Urban100 \cite{huang2015single} and Manga109 \cite{matsui2017sketch}. Each image is downscaled $\times 4$ and compressed with quality factors of 10, 20 and 40 to be consistent with previous works. \item[Implementation Details] Our network is trained on one Nvidia Titan Xp graphics card. The batch size is 36, and the patch size is 128 for the ground truth and 32 for the low-resolution input. We use Adam \cite{kingma2014adam} as the optimizer with a cosine annealing learning rate schedule, in which the initial learning rate is $2e-4$ and the minimum learning rate is $1e-7$. The scheduler restarts every $2.5e5$ iterations. The network is trained for $1e6$ iterations in total. \end{description} \subsection{Results for Image Quality Assessment} \begin{description}[style=unboxed,leftmargin=0cm] \item[Comparison with SOTA on Standard Test Sets] We compare the performance of CAJNN to the previous state-of-the-art (SOTA) methods on the standard test sets mentioned above. We report the PSNR and SSIM \cite{wang2004image} on the Y channel of the test sets to be consistent with previous works. We also show the number of parameters and the inference time on Set5 in Table \ref{tab:sr_result}. Depending on the workflow for solving the CAR and SR problem, these methods can be categorized into the following three types: (1) \textit{SR:} directly use pretrained SR models. (2) \textit{CAR+SR:} the aforementioned two-stage method, which first removes the compression artifacts and then sends the output images to the SR model. (3) \textit{Joint CAR \& SR:} the single-stage method that jointly handles CAR and SR with one model. We report both the direct output and the self-ensembled \cite{zhang2018residual} output of our network. According to Table \ref{tab:sr_result}, CAJNN significantly outperforms the existing methods for all QFs, yielding the highest overall PSNR across all five datasets. The improvement is consistently observed on SSIM as well. Moreover, our model is more lightweight than most of the current models (counting the two stages of the two-stage methods together), which results in faster inference on the same hardware (all tests are conducted on one Nvidia Titan Xp graphics card). \begin{table*} \caption{Quantitative comparison of applying SOTA SR methods, two-stage SR and CAR methods, and our CAJNN. The best two results are highlighted in \textcolor{red}{red} and \textcolor{blue}{blue} colors, respectively. Our method greatly outperforms all two-stage methods in terms of PSNR and SSIM, while having a relatively small model size and shorter runtime.
The runtime (inference only) is measured on the entire Set5.} \resizebox{\textwidth}{!}{ \begin{tabular}{c|llcccccccccccc} \hline \multirow{2}{*}{QF} & \multirow{2}{*}{Method} & \multirow{2}{*}{Network} & \multirow{2}{*}{Runtime (s)} & \multirow{2}{*}{Parameters (Million)} & \multicolumn{2}{c}{Set5} & \multicolumn{2}{c}{Set14} & \multicolumn{2}{c}{BSD100} & \multicolumn{2}{c}{Urban100} & \multicolumn{2}{c}{Manga109} \\ & & & & & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\ \hline \multirow{10}{*}{10} & \multirow{4}{*}{SR} & Bicubic & - & - & 23.99 & 0.6329 & 22.94 & 0.5513 & 23.33 & 0.5303 & 20.95 & 0.5182 & 21.94 & 0.6383 \\ & & EDSR & 1.94 & 43.1 & 23.41 & 0.6019 & 22.48 & 0.5272 & 22.96 & 0.5098 & 20.57 & 0.5006 & 21.53 & 0.6151 \\ & & RCAN & 2.04 & \textcolor{blue}{16} & 23.14 & 0.5733 & 22.29 & 0.5064 & 22.78 & 0.4984 & 20.36 & 0.4819 & 21.21 & 0.5878 \\ & & RRDB &\textcolor{blue}{ 0.65 } & 16.7 & 22.43 & 0.5223 & 22.86 & 0.5051 & 20.43 & 0.4940 & 20.43 & 0.4940 & 21.34 & 0.6075 \\ \cline{2-15} & \multirow{2}{*}{CAR+SR} & ARCNN+RRDB & 3.20+0.65 & 0.56+16.7 & 24.21 & 0.6699 & 23.38 & 0.5774 & 23.63 & 0.5474 & 21.28 & 0.5466 & 22.36 & 0.6856 \\ & & DnCNN+RRDB & 0.38+0.65 & 0.06+16.7 & 24.07 & 0.6434 & 23.13 & 0.5582 & 23.37 & 0.5324 & 21.04 & 0.5305 & 22.10 & 0.6532 \\ \cline{2-15} & \multirow{2}{*}{Joint CAR\&SR} & CAJNN (ours) & \textcolor{red}{0.48} & \textcolor{red}{ 14.8 } & \textcolor{blue}{25.04} & \textcolor{blue}{0.7169} & \textcolor{blue}{23.95} & \textcolor{blue}{0.6028} & \textcolor{blue}{23.84} & \textcolor{blue}{0.5598} & \textcolor{blue}{21.97} & \textcolor{blue}{0.5977} & \textcolor{blue}{23.29} & \textcolor{blue}{0.7333} \\ & & CAJNN (ours, self-ensembled) & 2.50 & \textcolor{red}{ 14.8 } & \textcolor{red}{25.14} & \textcolor{red}{0.7202} & \textcolor{red}{24.03} & \textcolor{red}{0.6052} & \textcolor{red}{23.88} & \textcolor{red}{0.5610} & \textcolor{red}{22.18} & \textcolor{red}{0.6051} & \textcolor{red}{23.44} & \textcolor{red}{0.7377} \\ \hline \multirow{10}{*}{20} & \multirow{4}{*}{SR} & Bicubic & - & - & 25.32 & 0.6761 & 23.85 & 0.5870 & 24.14 & 0.5611 & 21.66 & 0.5526 & 22.84 & 0.6724 \\ & & EDSR & 1.94 & 43.1 & 24.76 & 0.6490 & 23.59 & 0.5707 & 23.88 & 0.5482 & 21.38 & 0.5427 & 22.58 & 0.6549 \\ & & RCAN & 2.04 & \textcolor{blue}{16} & 24.44 & 0.6226 & 23.40 & 0.5502 & 23.65 & 0.5351 & 21.12 & 0.5234 & 22.14 & 0.6253 \\ & & RRDB &\textcolor{blue}{ 0.65 } & 16.7 & 24.65 & 0.6450 & 23.57 & 0.5661 & 23.79 & 0.5442 & 21.25 & 0.5365 & 22.38 & 0.6474 \\ \cline{2-15} & \multirow{2}{*}{CAR+SR} & ARCNN+RRDB & 3.20+0.65 & 0.56+16.7 & 25.40 & 0.7082 & 24.30 & 0.6091 & 24.39 & 0.5755 & 22.02 & 0.5811 & 23.52 & 0.7172 \\ & & DnCNN+RRDB & 0.38+0.65 & 0.06+16.7 & 25.55 & 0.6946 & 24.24 & 0.6001 & 24.28 & 0.5679 & 21.90 & 0.5732 & 23.24 & 0.6961 \\ \cline{2-15} & \multirow{2}{*}{Joint CAR\&SR} & CAJNN (ours) & \textcolor{red}{0.48} & \textcolor{red}{ 14.8 } & \textcolor{blue}{26.59} & \textcolor{blue}{0.7604} & \textcolor{blue}{25.03} & \textcolor{blue}{0.6391} & \textcolor{blue}{24.70} & \textcolor{blue}{0.5924} & \textcolor{blue}{23.06} & \textcolor{blue}{0.6482} & \textcolor{blue}{24.81} & \textcolor{blue}{0.7783} \\ & & CAJNN (ours, self-ensembled) & 2.50 & \textcolor{red}{ 14.8 } & \textcolor{red}{26.65} & \textcolor{red}{0.7633} & \textcolor{red}{25.10} & \textcolor{red}{0.6404} & \textcolor{red}{24.74} & \textcolor{red}{0.5936} & \textcolor{red}{23.28} & \textcolor{red}{0.6550} & \textcolor{red}{24.98} & \textcolor{red}{0.7820} \\ \hline 
\multirow{10}{*}{40} & \multirow{4}{*}{SR} & Bicubic & - & - & 26.38 & 0.7154 & 24.55 & 0.6201 & 24.77 & 0.5898 & 22.26 & 0.5877 & 23.66 & 0.7081 \\ & & EDSR & 1.94 & 43.1 & 26.01 & 0.6972 & 24.48 & 0.6120 & 24.62 & 0.5836 & 22.18 & 0.5893 & 23.73 & 0.7003 \\ & & RCAN & 2.04 & \textcolor{blue}{16} & 25.70 & 0.6726 & 24.30 & 0.5936 & 24.36 & 0.5704 & 21.86 & 0.5690 & 23.13 & 0.6673 \\ & & RRDB &\textcolor{blue}{ 0.65 } & 16.7 & 25.99 & 0.6958 & 24.50 & 0.6079 & 24.54 & 0.5804 & 22.10 & 0.5851 & 23.50 & 0.6918 \\ \cline{2-15} & \multirow{2}{*}{CAR+SR} & ARCNN+RRDB & 3.20+0.65 & 0.56+16.7 & 26.65 & 0.7495 & 25.16 & 0.6424 & 25.06 & 0.6053 & 22.82 & 0.6235 & 24.68 & 0.7578 \\ & & DnCNN+RRDB & 0.38+0.65 & 0.06+16.7 & 26.87 & 0.7403 & 25.15 & 0.6373 & 25.00 & 0.5995 & 22.78 & 0.6194 & 24.42 & 0.7404 \\ \cline{2-15} & \multirow{2}{*}{Joint CAR\&SR} & CAJNN (ours) & \textcolor{red}{0.48} & \textcolor{red}{ 14.8 } & \textcolor{blue}{28.05} & \textcolor{blue}{0.7981} & \textcolor{blue}{25.96} & \textcolor{blue}{0.6729} & \textcolor{blue}{25.43} & \textcolor{blue}{0.6240} & \textcolor{blue}{24.09} & \textcolor{blue}{0.6962} & 26.25 & \textcolor{blue}{0.8177} \\ & & CAJNN (ours, self-ensembled) & 2.50 & \textcolor{red}{ 14.8 } & \textcolor{red}{28.16} & \textcolor{red}{0.7993} & \textcolor{red}{26.03} & \textcolor{red}{0.6742} & \textcolor{red}{25.46} & \textcolor{red}{0.6251} & \textcolor{red}{24.31} & \textcolor{red}{0.7011} & \textcolor{red}{26.44} & \textcolor{red}{0.8211} \\ \hline \end{tabular} } \label{tab:sr_result} \end{table*} Figure \ref{fig:jpeg_combine} gives a qualitative example of the result of our model, where the input image is \textit{woman} from Set5 \cite{bevilacqua2012low}, downsampled and compressed with a wide range of quality factors from 10 to 100. It is worth noting that compression with very low quality factors causes a significant shift in the hue and spatial color distribution of the original image, which can be seen in the leftmost LR image (QF = 10). Our model is able to correct these color aberrations of RGB images with high consistency across different QFs. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{img/jpeg_combine.png} \caption{The qualitative result of our network from compressed images with different quality factors (zoom in for a better view). Our model is able to reconstruct reasonable SR images, even at extremely low quality factors. Besides, our results are free of color jittering and other inconsistencies over such a wide range of compression ratios. The image is the ``woman'' image from Set5 \cite{bevilacqua2012low}.} \label{fig:jpeg_combine} \end{figure} \item[Results on User Images] Besides the above experiments on standard test images, we also conduct experiments on real user images to demonstrate the effectiveness of our model. We mainly focus on the perceptual effect, since there are no ground-truth images. Figure \ref{fig:userimage} shows the CAJNN results on a real-world image from the WIDER face dataset \cite{yang2016wider}. For comparison, RCAN \cite{zhang2018image} and RRDB \cite{wang2018esrgan} are used as the representative SR methods, and ARCNN \cite{dong2015compression} and DnCNN \cite{svoboda2016compression} are used as the representative CAR methods. The real-world images have unknown downsampling kernels and compression mechanisms, depending on the platforms. According to Figure \ref{fig:userimage}, the SR methods generate images with obvious color shift and ringing artifacts.
These artifacts are alleviated by the two-stage methods, but the results remain blurry. Compared with the two-stage methods, our CAJNN provides SR outputs with sharp edges and rich details, which demonstrates the superiority of our proposed single-stage method when applied to real-world CARSR problems. \end{description} \begin{figure*}[htbp] \scriptsize \centering \begin{tabular}{cc} \begin{adjustbox}{valign=t} \begin{tabular}{c} \includegraphics[width=0.155\textwidth]{img/fig2/user_fr.jpg} \\ Full-size input \end{tabular} \end{adjustbox} \begin{adjustbox}{valign=t} \begin{tabular}{ccccccc} \includegraphics[width=0.11 \textwidth]{img/fig2/lr_1.jpg} \hspace{-4mm} & \includegraphics[width=0.11 \textwidth]{img/fig2/rcan_21.png} \hspace{-4mm} & \includegraphics[width=0.11 \textwidth]{img/fig2/arcnn_rrdb_21.png} \hspace{-4mm} & \includegraphics[width=0.11 \textwidth]{img/fig2/arcnn_rcan_21.png}\hspace{-3.5mm} & \includegraphics[width=0.11 \textwidth]{img/fig2/dncnn_rrdb_21.png}\hspace{-3.5mm} & \includegraphics[width=0.11 \textwidth]{img/fig2/dncnn_rcan_21.png}\hspace{-3.5mm} & \includegraphics[width=0.11 \textwidth]{img/fig2/CAJRN_21.jpeg}\hspace{-3.5mm} \\ \includegraphics[width=0.11 \textwidth]{img/fig2/lr_23.jpg} \hspace{-4mm} & \includegraphics[width=0.11 \textwidth]{img/fig2/rcan_23.png} \hspace{-4mm} & \includegraphics[width=0.11 \textwidth]{img/fig2/arcnn_rrdb_23.png} \hspace{-4mm} & \includegraphics[width=0.11 \textwidth]{img/fig2/arcnn_rcan_23.png}\hspace{-3.5mm} & \includegraphics[width=0.11 \textwidth]{img/fig2/dncnn_rrdb_23.png}\hspace{-3.5mm} & \includegraphics[width=0.11 \textwidth]{img/fig2/dncnn_rcan_23.png}\hspace{-3.5mm} & \includegraphics[width=0.11 \textwidth]{img/fig2/CAJRN_23.png}\hspace{-3.5mm} \\ Input \hspace{-4mm} & RCAN \hspace{-4mm} & ARCNN+RRDB \hspace{-4mm} & ARCNN+RCAN\hspace{-4mm} & DnCNN+RRDB\hspace{-4mm} & DnCNN+RCAN\hspace{-4mm} & \textbf{Ours (CAJNN)}\hspace{-4mm} \\ \end{tabular} \end{adjustbox} \end{tabular} \caption{CAR \& SR performance comparison of different methods on a user's image from the WIDER face dataset \cite{yang2016wider}. Compared with previous methods, our model can generate artifact-free high-resolution images with sharp edges.} \label{fig:userimage} \vspace{-5mm} \end{figure*} \subsection{Results for Low-Resolution Text Recognition} Comparing the input LR image and our output in Figure \ref{fig:userimage}, the text becomes more readable after being processed by our model. Inspired by this observation, we conducted the following experiments to explore our model's potential to support real-scene text recognition on low-resolution characters. We compare the total accuracy of generic text recognition on the ICDAR2013 Focused Scene Text dataset \cite{karatzas2013icdar}, with TPS-ResNet-BiLSTM-Attn \cite{baek2019wrong} as the text recognition method. The baseline result is obtained by directly recognizing the original input images. For comparison with the baseline, we use the CAJNN model described in the previous sections to generate artifact-free SR images from the original inputs, and then run recognition on the outputs. As can be seen in Table \ref{tab:ocr}, preprocessing with CAJNN improves the recognition accuracy from 85.30\% to 85.75\%, which indicates that the outputs of our model are not only visually appealing to human viewers, but also provide more distinct information to the text recognition network, as shown in Figure \ref{fig:ocr}.
It is worth noting that our output images are $4\times$ the size of the baseline inputs, and the average detection time increases from 31.22s to 41.56s. Although the improvement in accuracy demonstrates the positive effect of our model, the rise in computation is hard to ignore. Therefore, we disentangle the influence of SR and CAR by bicubically downsampling the CARSR output images and obtain a third recognition result. Since the image size remains the same as that of the original image, the detection time is identical to the baseline. Compared with the baseline, the recognition accuracy still improves by 0.27\% due to the reduction of compression artifacts, which indicates that our model is capable of extracting and maintaining the critical features of input images. This experimental result points to a plausible direction for future text recognition research: image quality plays a vital role in recognition accuracy, and it can be improved by utilizing the learned priors of a pretrained CARSR model. \begin{table}[htbp] \caption{Text recognition accuracy on the ICDAR 2013 Focused Scene Text dataset \cite{karatzas2013icdar}. Compared with the baseline method, the introduction of our CARSR method improves the recognition performance by 0.45\% (without downsampling) and 0.27\% (with downsampling). } \vspace{2mm} \resizebox{\columnwidth}{!}{% \begin{tabular}{l c c } \hline Method & Accuracy & Detection Time (s)\\ \hline Baseline \cite{baek2019wrong} & 85.30\% & 31.22 \\ Ours + Baseline \cite{baek2019wrong} & \textbf{85.75}\% & 41.56 \\ Ours + Downsample + Baseline \cite{baek2019wrong} & 85.57\% & 31.22 \\ \hline \end{tabular} } \label{tab:ocr} \end{table} \begin{figure}[htbp] \captionsetup[subfigure]{labelformat=empty} \begin{center} \begin{subfigure}[b]{0.23\linewidth} \includegraphics[width=\linewidth]{img/word_161.png} \end{subfigure} \begin{subfigure}[b]{0.23\linewidth} \includegraphics[width=\linewidth]{img/word_161_sr_x4.png} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=0.7667\linewidth]{img/word_161_sr_lr.png} \end{subfigure} \begin{subfigure}[b]{0.23\linewidth} \includegraphics[width=\linewidth]{img/word_836.png} \subcaption{GT} \end{subfigure} \begin{subfigure}[b]{0.23\linewidth} \includegraphics[width=\linewidth]{img/word_836_sr_x.png} \subcaption{Ours ($\times 4$)} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=0.7667\linewidth]{img/word_836_sr_lr.png} \subcaption{Ours (downsampled)} \end{subfigure} \end{center} \vspace{-5mm} \caption{Test samples from the ICDAR2013 dataset \cite{karatzas2013icdar} (\textit{word\_161}, \textit{word\_836}). The first column shows the original input images, the second column is the CARSR output generated by our method, and the third column is acquired by downsampling the second column. Comparing the recognition results on the first and second columns shows that our method can serve as a supportive preprocessing step for the recognition of low-resolution text. Moreover, the artifact-free images in the third column provide more recognizable features to the baseline model without increasing the image size.} \label{fig:ocr} \vspace{-4mm} \end{figure} \subsection{Results for Extremely Tiny Face Detection} Extremely tiny face detection is another practical yet challenging task in high-level computer vision.
Most of the state-of-the-art (SOTA) face detectors \cite{zhu2018seeing,yoo2019extd} for in-the-wild images have already taken various scales and distortions into consideration to achieve impressive detection performance. \cite{bai2018finding} proposed a solution to tiny face detection that explicitly restores an HR face from a small blurry one using a Generative Adversarial Network (GAN) \cite{goodfellow2014generative}. We experimentally validate the effect of our CAJNN on tiny face images in the WIDER FACE dataset \cite{yang2016wider} by comparing the detection results on the following three types of data: original HR images (serving as the baseline), downsampled LR images (serving as the extremely tiny face inputs), and the CARSR outputs of our model. \cite{hu2017finding} is applied as the backbone face detector (we use an unofficial PyTorch \cite{paszke2019pytorch} implementation available at \url{https://github.com/varunagrawal/tiny-faces-pytorch}). Table \ref{tab:tinyface} shows the Average Precision (AP) of the downsampled tiny images and our enhanced ones on all three validation subsets (easy, medium, and hard) of WIDER FACE \cite{yang2016wider}. From Table \ref{tab:tinyface}, we observe that preprocessing with CAJNN dramatically improves detection on the LR inputs, raising the AP on the hard set from 0.317 to 0.611. The reason is that the baseline detector downsamples its input with large strides. Since the tiny faces contain less information to begin with, the detailed facial structure is lost after several strided convolutions. In contrast, our CAJNN provides an artifact-free SR image, which can boost the detection performance by better utilizing the information in small faces. In Figure \ref{fig:tinyface}, the precision-recall curve of our reconstructed images (green line) is close to that of the ground truth (red line) on the easy and medium subsets. On the hard subset, our CAJNN yields a significant improvement compared to the LR curve. The remaining gap between our output and the GT is due to the irreversible loss of information during downsampling, which happens more frequently for the extremely tiny faces of the hard set. \begin{table}[tb] \caption{Average precision of three data types in the WIDER FACE validation set \cite{yang2016wider} with the same face detector \cite{hu2017finding}. The application of our CARSR method greatly improves the detection performance with LR images on all three subsets.} \begin{center} \resizebox{0.7\columnwidth}{!}{% \begin{tabular}{l c c c} \hline Input Data & Easy & Medium & Hard\\ \hline GT & 0.900 & 0.887 & 0.792 \\ LR & 0.824 & 0.692 & 0.317\\ LR + Ours & 0.893 & 0.857 & 0.611 \\ \hline \end{tabular} } \end{center} \label{tab:tinyface} \end{table} \begin{figure*}[htbp] \captionsetup[subfigure]{labelformat=empty} \begin{center} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{img/easy.pdf} \subcaption{Easy} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{img/medium.pdf} \subcaption{Medium} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{img/hard.pdf} \subcaption{Hard} \end{subfigure} \end{center} \vspace{-5mm} \caption{The precision-recall curves of the three subsets in WIDER FACE \cite{yang2016wider}.
The AUC (area under the curve) reflects the detector's performance on each type of data (\textcolor{red}{GT}, \textcolor{blue}{LR} and \textcolor{green}{CARSR}). With preprocessing by our model, the detection performance on tiny images can be improved to nearly that achieved with GT. (Zoom in for a better view.)} \label{fig:tinyface} \end{figure*} \subsection{Ablation Study} \begin{description}[style=unboxed,leftmargin=0cm] \item[Effect of Multi-scale Information] As discussed in previous sections, both intra- and inter-block context information is important for designing a CARSR network. In other low-level vision tasks, context information at different scales has already proven effective in improving network performance. Inspired by the first convolution layer of ResNet \cite{he2016deep}, previous researchers \cite{niklaus2018context} applied $7\times 7$ convolutions to extract context features for the video frame interpolation task. However, such large kernels bring a tremendous number of parameters into the network, especially when applied in the feature domain, resulting in a higher computational cost. Another way of enlarging the filter's receptive field is to use a non-local module \cite{liu2018non,wang2018non}, where the input images are downsampled by convolutional strides and processed at different scales. The non-local module has a rather complex structure and also a large number of parameters. In order to use the context information in a much simpler and lighter representation, our method adopts atrous convolution. By adjusting the dilation rate $r$, the filter can incorporate context information from a larger receptive field without dramatically increasing the number of parameters compared to the above methods. \end{description} \begin{table}[tb] \caption{Ablation study on the validation set (Set5). We report the performance of CAJNN without the long-range skip connection and ASPP as the baseline. Rows 1--3 show the influence of different ways to extract contextual information by replacing ASPP with other network structures. Rows 4--5 compare the effect of two different upsampling methods on PSNR. The combination of the ASPP and PixelShuffle modules yields the best performance, and is thus adopted in our network architecture.} \resizebox{\columnwidth}{!}{% \begin{tabular}{l c c c c c} \hline Model & Base & 1 & 2 & 3 & 4\\ \hline Non-local module & & $\surd$ & & & \\ ASPP & & & $\surd$ & $\surd$ & \\ Sequential atrous pooling & & & & & $\surd$ \\ \hline Upconvolution & $\surd$ & $\surd$ & $\surd$ & & \\ Pixelshuffle & & & & $\surd$ & $\surd$\\ \hline PSNR (dB) & 27.868 & 28.274 & 28.276 & \textbf{28.292} & 28.262\\ \hline \end{tabular}} \label{tab:sr_ablation} \end{table} We conduct an ablation study to illustrate the effect of different ways of representing contextual information in Table \ref{tab:sr_ablation}. In Rows 1--3, we compare the performance of the non-local module, ASPP, and sequential atrous pooling. Comparing the base model to Column 1 in Table \ref{tab:sr_ablation}, we can conclude that the introduction of multi-scale information via a non-local module significantly improves the PSNR, by 0.406 dB. This result validates the superiority of aggregating both intra- and inter-block features rather than using a purely local representation for the CARSR task. Furthermore, as seen by comparing Columns 1 and 2, replacing the non-local module by our well-designed ASPP can improve the PSNR by 0.002 dB.
Although the improvement is rather small, it is worth noting that the ASPP has fewer convolution layers and parameters, which results in a smaller model size and fewer FLOPs. Remarkably, it can achieve results comparable to, or even better than, those yielded by models with more parameters. By comparing Columns 3 and 4, we also note that the PSNR of ASPP is higher than that of sequential atrous pooling by 0.03 dB, which means that the pyramid-fusion structure is more efficient at representing the multi-scale information. Finally, by comparing Columns 2 and 3 of Table \ref{tab:sr_ablation}, we can observe that the PixelShuffle layer brings a 0.016 dB improvement in PSNR. \begin{description}[style=unboxed,leftmargin=0cm] \item[End-to-End Supervision by Joint CAR and SR] Another ablation study on supervising the CARSR task is conducted to illustrate the effect of joint end-to-end training. Instead of supervising only with $I^{HRHQ}$, we attempt to disentangle CAR and SR by introducing a reconstruction loss according to the definition in Equation~\ref{lrhq}, where an artifact-free LR image $I^{LRHQ}$ is generated from the ground truth $I^{HRHQ}$: \end{description} \begin{equation} \label{lrhq} I^{LRHQ} = (k \otimes I^{HRHQ})\downarrow_s, \end{equation} and use it to explicitly supervise the intermediate CAR output $\hat{G}(f^{L'})$ after the context-aware module: \begin{equation} l^{LR} = l(I^{LRHQ}, \hat{G}(f^{L'})). \end{equation} Denoting the pixel-wise loss of the final output and ground truth (shown in Equation~\ref{eq:target}) as $l^{HR}$, the overall training loss becomes: \begin{equation} l = l^{HR} + \lambda l^{LR}. \end{equation} By increasing the weight $\lambda$, we can obtain models trained with a higher level of disentanglement; a schematic implementation of this objective is sketched after Table~\ref{tab:sr_e2e}. We train three models with $\lambda=0,1,16$ while keeping all the other factors the same. The performance of these models on our validation set is shown in Table \ref{tab:sr_e2e}. The trend is clear: the PSNR increases as the supervision becomes more entangled (i.e., as $\lambda$ decreases), which demonstrates the effectiveness of joint CARSR training in a single-stage network. \begin{table} \caption{Ablation study on joint end-to-end supervision. We introduce the explicit reconstruction loss as a disentanglement mechanism for CAR and SR. By changing the weight of this loss term, we can study the effect of different levels of joint supervision. Among all the settings, the model trained without the reconstruction loss performs best on our validation set.} \resizebox{\columnwidth}{!}{% \begin{tabular}{l c c c} \hline Model & a & b & c \\ \hline Weight of reconstruction loss $\lambda$ & 16 & 1 & 0 \\ \hline PSNR (dB) & 27.507 & 27.627 & \textbf{27.672} \\ \hline \end{tabular} } \label{tab:sr_e2e} \end{table}
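For concreteness, the joint objective can be summarized in a short PyTorch-style sketch. The network interface (returning both the final SR image and the intermediate CAR output), the $\ell_1$ pixel loss, and the bicubic approximation of $(k \otimes I^{HRHQ})\downarrow_s$ are illustrative assumptions and not the exact released training code.

\begin{verbatim}
import torch
import torch.nn.functional as F

def joint_loss(model, lr_lq, hr_hq, lam=1.0, scale=4):
    """l = l^{HR} + lambda * l^{LR}; lam = 0 recovers pure joint training."""
    # assumed interface: the network returns the SR image and the
    # intermediate CAR output \hat{G}(f^{L'})
    sr, car_out = model(lr_lq)

    # l^{HR}: pixel-wise loss between the final output and I^{HRHQ}
    l_hr = F.l1_loss(sr, hr_hq)

    # I^{LRHQ}: the downsampling operator is approximated here by
    # bicubic resampling of the ground truth
    lr_hq = F.interpolate(hr_hq, scale_factor=1.0 / scale,
                          mode='bicubic', align_corners=False)

    # l^{LR}: explicit supervision of the intermediate artifact-free LR image
    l_lr = F.l1_loss(car_out, lr_hq)

    return l_hr + lam * l_lr
\end{verbatim}

Setting \texttt{lam} to zero recovers the fully joint supervision, which performs best in Table~\ref{tab:sr_e2e}.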
\section{Conclusion} In this paper, we propose a single-stage network for the joint CARSR task to directly reconstruct an artifact-free high-resolution image from a compressed low-resolution input. To address the CARSR problem, we make use of the contextual information by introducing a specially designed ASPP that integrates both intra- and inter-block features. Our experiments illustrate the effectiveness and efficiency of our method on both standard test images and real-world images. Moreover, the extensive experimental results reveal a high potential for enhancing the performance of current methods for various high-level computer vision tasks, \eg~low-resolution text recognition and extremely tiny face detection. \section*{Acknowledgment} This research is supported by HP Inc., Palo Alto, CA. {\small \bibliographystyle{IEEEtran}
\section{Introduction} Counting microscopic degrees of freedom for extremal black holes in string theory is a useful probe into aspects of quantum gravity \cite{Strominger:1996sh}. For supersymmetric black holes, one should in principle be able to identify the degrees of freedom both from the macroscopic solution as well as count them from the microscopic description of these black holes. The $1/4$ BPS dyonic black holes in ${\cal N}=4$ theory are a system which has been extensively studied in this context, see \cite{Sen:2007qy,Dabholkar:2012zz} for reviews. The identification of the degrees of freedom is complicated by the fact that classical solutions of black holes are generically multi-centered and usually contain hair degrees of freedom localized outside the horizon \cite{Sen:2009vz,Banerjee:2009uk,Jatkar:2009yd}. The microscopic analysis counts all these configurations together. Let us make this precise: let $d_{\rm micro} (\vec q)$ be the degeneracy, or in the case of extremal supersymmetric black holes the appropriate supersymmetric index, evaluated from the microscopic description of a BPS state with charge $\vec q$. Similarly let $d_{\rm macro} (\vec q)$ be the corresponding macroscopic index. Then \begin{eqnarray}\label{split} d_{\rm macro}( \vec q) = \sum_{n} \sum_{ \stackrel{ \{\vec q_i \} , \vec q_{\rm hair}} {\sum_{i=1}^n \vec q_i + \vec q_{\rm hair} = \vec q} } \left( \prod_{i = 1}^n d_{\rm hor} ( \vec q_i ) \right) d_{\rm hair} ( \vec q_{\rm hair} ; \{\vec q_i \} ) \end{eqnarray} Each term on the right hand side of (\ref{split}) is the contribution to the index of the $n$-centered black hole configurations. Here $d_{\rm hor} ( \vec q_i)$ is the contribution to the index from the horizon degrees of freedom with charge $\vec q_i$, and $d_{\rm hair} ( \vec q_{\rm hair} ; \{\vec q_i \} )$ is the index of the hair carrying total charge $\vec q_{\rm hair}$ of an $n$-centered black hole whose horizons carry charges $\vec q_1, \cdots, \vec q_n$. We expect \begin{eqnarray}\label{basic} d_{\rm macro} (\vec q) = d_{\rm micro}( \vec q) . \end{eqnarray} It would simplify matters if we could restrict our attention to single centered black hole configurations. Then (\ref{split}) indicates that we would need to identify the hair in order to isolate the horizon degrees of freedom. Since we are dealing with $1/4$ BPS states in ${\cal N}=4$ theories, which break 12 supersymmetries, the degeneracy $d(\vec q)$ will refer to the index \begin{equation}\label{heltrace} B_6 = \frac{1}{6!}{\rm Tr} ( (2J)^6 (-1)^{2J} ) , \end{equation} where $J$ is the component of the angular momentum in, say, the $3$ direction. The factorization of the Hilbert space into the hair degrees of freedom and the horizon degrees of freedom follows from the fact that these are well separated due to the presence of an infinite throat \cite{Sen:2009vz}. The utility of identifying the horizon degrees of freedom lies in the fact that the horizon is spherically symmetric and therefore carries zero angular momentum, $J=0$. The index taken over the horizon states thus reduces to $(-1)^{2J} d_{\rm hor} = d_{\rm hor}$, where $d_{\rm hor}$ is the total number of states associated with the horizon. Therefore the index of the horizon states must be a positive number. This leads to an important check on the microscopic counting and the equality (\ref{basic}). Once one determines the hair degrees of freedom for a given macroscopic black hole and factors them out of the index, what must remain is a positive number which counts the index of the horizon states.
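This check can be phrased concretely in terms of generating functions: if the graded indices are assembled into partition functions, factoring out the hair is a formal power-series division, and for a single centered black hole every coefficient of the quotient must be non-negative. A toy illustration in Python (all numbers below are placeholders, not actual dyon indices):

\begin{verbatim}
def divide(macro, hair):
    """Coefficients of Z_hor = Z_macro / Z_hair as formal power series."""
    hor = []
    for n in range(len(macro)):
        acc = macro[n] - sum(hair[m] * hor[n - m]
                             for m in range(1, min(n, len(hair) - 1) + 1))
        hor.append(acc // hair[0])   # exact for integer-valued indices
    return hor

# hypothetical input data, for illustration only
Z_macro = [1, 2, 7, 14, 35]   # placeholder "macroscopic" indices
Z_hair  = [1, 2, 3, 4, 5]     # placeholder hair partition function

Z_hor = divide(Z_macro, Z_hair)   # -> [1, 0, 4, 2, 14]
assert all(c >= 0 for c in Z_hor), "a negative entry signals missed hair modes"
\end{verbatim}

In the rest of the paper this division is carried out with the actual dyon partition functions, which depend on three chemical potentials $(\rho, \sigma, v)$ conjugate to $Q^2$, $P^2$ and $Q\cdot P$.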
This argument clearly relies on what the hair degrees of freedom are, and this in turn depends on the duality frame of the macroscopic solution. This prediction was tested in \cite{Sen:2010mz} with the assumption that there exists a frame in which the only hair degrees of freedom are the fermionic zero modes associated with the broken supersymmetry generators. For black holes in ${\cal N}=8$ theory there is evidence for this in \cite{Chowdhury:2014yca,Chowdhury:2015gbk}. These authors worked in a frame in which the black hole configuration reduced to a system of only D-branes and showed that the only hair degrees of freedom were the fermionic zero modes, and that the BPS configuration indeed had zero angular momentum. However such a frame has not yet been shown to exist for black holes in ${\cal N}=4$ theory. Given this situation, one way of proceeding is to evaluate the partition functions corresponding to the hair degrees of freedom in a given frame and thus isolate the horizon degrees of freedom. This has already been done in \cite{Banerjee:2009uk,Jatkar:2009yd} for $1/4$ BPS dyons in the type IIB frame, but a test of positivity of the index for the resulting horizon degrees of freedom has not been performed. We perform this analysis in this paper and demonstrate that $d_{\rm hor}$ is indeed positive. As we will see, this is quite remarkable, since factoring out the hair degrees of freedom naively seems to introduce terms with negative contributions to the index. We adapt the proof of \cite{Bringmann:2012zr} for configurations with magnetic charge $P^2 =2$ and demonstrate that the index is positive. We then extend this observation to all the CHL models and to other orbifolds associated with Mathieu moonshine introduced in \cite{Chattopadhyaya:2017ews,Persson:2013xpa}. In \cite{Chattopadhyaya:2017ews,Chattopadhyaya:2018xvg} it was observed that for the ${\cal N}=4$ models obtained as freely acting $\mathbb{Z}_2, \mathbb{Z}_3$ orbifolds of type IIB on $T^6$, the index for single centered configurations, after factoring out the sign due to the fermionic zero modes, did not obey the expectation that $d_{\rm hor}$ is positive \footnote{Please see tables \ref{qp0}, \ref{qp1}, \ref{qp2}, which reproduce this observation.}. But as the above discussion shows, a possible reason could be that the assumption that there exists a frame in which the fermionic zero modes are the only hair degrees of freedom is not true for these models. Therefore we re-examine this question in this paper. Following the same procedure used for the CHL models, we isolate the hair degrees of freedom in the type IIB frame. Then, on examining the sign of the index for single centered black holes, we observe that $d_{\rm hor}$ is positive. The organisation of the paper is as follows. In section \ref{sechorstate} we briefly review the hair degrees of freedom and the partition function for the horizon degrees of freedom for the $1/4$ BPS dyonic black hole in type IIB compactified on $K3\times T^2$. We then generalise this to all the CHL orbifolds as well as other orbifolds associated with Mathieu Moonshine. Finally we construct the partition function for the horizon states for the toroidal models obtained by freely acting $\mathbb{Z}_2, \mathbb{Z}_3$ orbifolds of type IIB on $T^6$. In section \ref{check} we perform a consistency check on the $d_{\rm hor}$ obtained. This check relies on the fact that the 5-dimensional BMPV black hole has the same near horizon geometry as the 4-dimensional dyonic black hole \cite{Banerjee:2009uk,Jatkar:2009yd}.
Therefore $d_{\rm hor}$ for the BMPV black hole should agree with that of the $1/4$ BPS dyon. We show that this is indeed the case for all the examples. Finally, in section \ref{signind}, armed with the $d_{\rm hor}$ for all the models, we study the positivity of the index for single centered black holes. We have numerically evaluated the indices of horizon states for several charges in all the ${\cal N}=4$ models for which dyon partition functions are known, confirming that the index is positive. We adapt the proof of \cite{Bringmann:2012zr} to show that the index is positive for charge configurations with $P^2 = 2$. Section \ref{conclusions} contains our conclusions. \section{Horizon states for the $1/4$ BPS dyon} \label{sechorstate} In this section we construct the partition function for the horizon states of $1/4$ BPS dyons in ${\cal N}=4$ compactifications. This is done by identifying the `hair' degrees of freedom which are localized outside the horizon. Such a partition function for the horizon states was constructed for the canonical ${\cal N}=4$ theory, obtained by compactifying type IIB string theory on $K3\times T^2$, in \cite{Banerjee:2009uk,Jatkar:2009yd} in the type IIB frame. We review this construction and then extend the analysis to the other ${\cal N}=4$ models. The ${\cal N}=4$ compactifications of interest are type IIB theory on $K3\times T^2/\mathbb{Z}_N$, where $\mathbb{Z}_N$ acts as an automorphism $g'$ on $K3$ along with a shift of $1/N$ units on one of the circles of $T^2$. The action of $g'$ can be labelled by the $26$ conjugacy classes of the Mathieu group $M_{23}$. The classes $pA$ with $ p =2, 3, 5, 6, 7, 8 $ and the class $4B$ are known as Nikulin's automorphisms of $K3$. They were first introduced in \cite{Chaudhuri:1995ve,Chaudhuri:1995dj} as models dual to heterotic string theory with ${\cal N}=4$ supersymmetry but with gauge groups of rank reduced from the maximal rank of $28$. All these compactifications admit $1/4$ BPS dyons; let $(Q, P)$ be the electric and magnetic charge vector of such a dyon. Then the $1/4$ BPS index $B_6$ is given by \cite{Dijkgraaf:1996it,Jatkar:2005bh,David:2006ji,David:2006yn,David:2006ud} \be \label{b6phi10} -B_6=\frac{1}{N}(-1)^{Q \cdot P +1}\int_{{\cal C}}{ d}\rho{ d} \sigma {d}v\; e^{-\pi i ( N \rho Q^2+\sigma P^2/N +2v Q\cdot P)}\frac{1}{\tilde\Phi_k(\rho,\sigma, v)}, \ee where ${\cal C}$ is a contour in the complex 3-plane defined by \begin{eqnarray}\label{contour} \rho_2 = M_1, \qquad \sigma_2 = M_2, \qquad v_2 = - M_3, \\ \nonumber 0\leq \rho_1 \leq 1, \qquad 0 \leq \sigma_1 \leq N, \qquad 0 \leq v_1 \leq 1. \end{eqnarray} Here $\rho= \rho_1 + i \rho_2,\ \sigma= \sigma_1 + i \sigma_2,\ v = v_1 + i v_2$, and $M_1, M_2, M_3$ are large, fixed positive numbers with $M_3 \ll M_1, M_2$. The contour in (\ref{contour}) implies that we first expand in powers of $e^{2 \pi i \rho}, e^{2\pi i \sigma}$ and at the end perform the expansion in $e^{2\pi i v}$. The Siegel modular form $\tilde\Phi_k(\rho, \sigma, v)$, transforming under $Sp(2,\mathbb{Z})$ or its subgroups for $N>1$, admits an infinite product representation given by \bea \label{phi10} \tilde{\Phi}_k(\rho,\sigma,v)&=&e^{2\pi i(\rho+ \sigma/N+v)} \times \\ \nn && \prod_{r=0}^{N-1}\prod_{\begin{smallmatrix}k'\in \mathbb{Z}+\frac{r}{N},\;l\in \mathbb{Z},\\ j\in \mathbb{Z}\\ k',l\geq 0;\;\; j<0 \;{\rm for}\; k'=l=0\end{smallmatrix}}(1-e^{2\pi i(k'\sigma+l\rho+jv)})^{\sum_{s=0}^{N-1} e^{-2\pi i s l/N} c^{(r,s)}(4k'l-j^2)}.
\eea The coefficients $c^{(r,s)}$ are determined from the expansion of the twisted elliptic genera for the various order $N$ orbifolds $g'$ of $K3$. The twisted elliptic genus of $K3$ is defined by \begin{eqnarray} F^{(r, s) }( \tau, z) &=& \frac{1}{N} {\rm Tr}_{RR \;\; g^{\prime r} } \left[ ( -1)^{F_{K3} + \bar F_{K3} } g^{\prime s} e^{2\pi i z F_{K3} } q^{L_0 - \frac{c}{24} }\bar q ^{\bar L_0 - \frac{\bar c}{24} } \right], \\ \nonumber &=& \sum_{ j \in \mathbb{Z} , \; n \in \mathbb{Z}/N} c^{( r, s) } ( 4n - j^2) e^{2\pi i n \tau + 2\pi i j z}, \\ \nonumber & & \qquad\qquad \qquad \qquad 0\leq r, s\leq N-1. \end{eqnarray} The trace is performed over the Ramond-Ramond sector of the ${\cal N}=(4, 4) $ superconformal field theory of $K3$ with $( c, \bar c) = ( 6, 6 )$; here $F_{K3}, \bar F_{K3}$ are the left and right moving fermion numbers, and $j$ labels the left moving $U(1)$ charge of the $SU(2)$ $R$-symmetry of $K3$. The twisted elliptic genera for the $g'$ corresponding to conjugacy classes of $M_{23}\cup M_{24}$ have been evaluated in \cite{Chattopadhyaya:2017ews}. These take the form \begin{eqnarray}\label{explielip} F^{(0, 0)} ( \tau, z) &=& \alpha_{g'}^{(0, 0)} A( \tau, z) , \\ \nonumber F^{(r, s) } ( \tau, z) &=& \alpha_{g'}^{(r, s) } A( \tau, z) + \beta_{g'}^{(r, s) }(\tau) B(\tau, z) , \\ \nonumber && \qquad\qquad r, s \in \{0, 1, \cdots N-1 \} \; \hbox{with } ( r, s) \neq (0, 0), \end{eqnarray} where \begin{eqnarray}\label{ab} A(\tau, z) &=& \frac{\theta_2^2(\tau,z)}{\theta_2^2(\tau, 0) }+\frac{\theta_3^2(\tau,z)}{\theta_3^2(\tau, 0)}+\frac{\theta_4^2(\tau,z)}{\theta_4^2(\tau, 0) }, \\ \nonumber B(\tau, z) &=& \frac{\theta_1^2(\tau, z) }{\eta^6(\tau) }. \end{eqnarray} The coefficients $\alpha_{g'}^{(r, s)}$ in (\ref{explielip}) are numerical constants, while $\beta_{g'}^{(r, s) }(\tau) $ are modular forms that transform under $\Gamma_0(N)$. For $g'$ corresponding to conjugacy classes of $M_{23}$, they can be read off from appendix E of \cite{Chattopadhyaya:2017ews}. For example, in the case of the $pA$ orbifolds with $p =1,2, 3, 5, 7$, they are given by \cite{David:2006ji} \begin{eqnarray}\label{2atwist} F^{(0, 0)} &=& \frac{8}{N} A(\tau, z) , \\ \nonumber F^{(0, s)} &=& \frac{8}{(N+1)N} A(\tau, z) - \frac{2}{N+1} B(\tau, z) {\cal E}_N(\tau) , \\ \nonumber F^{(r,rk)} &=& \frac{8}{N(N+1)} A(\tau, z) + \frac{2}{N(N+1)} B(\tau, z) {\cal E}_N(\frac{\tau+k}{N} ) , \\ \nonumber {\cal E}_N(\tau) &=& \frac{12i}{\pi ( N-1) } \partial_\tau [ \ln \eta(\tau) - \ln \eta( N\tau) ]. \end{eqnarray} For $N$ composite, corresponding to the classes $4B, 6A, 8A$, the strategy for the construction of the twisted elliptic genus was first given in \cite{Govindarajan:2009qt}, and it was worked out explicitly there for the $4B$ example \footnote{Suresh Govindarajan informed us that the authors of \cite{Govindarajan:2009qt} also explicitly constructed all the sectors of the $6A$ and $8A$ twisted elliptic genera, though this was not reported in the paper.}. The papers \cite{Cheng:2010pq,Eguchi:2010fg,Gaberdiel:2010ch} contain the twining characters $F^{(0, s)}$, and \cite{Gaberdiel:2012gf} also contains the strategy to construct the twisted elliptic genera for the other conjugacy classes of $M_{23}$, together with a Mathematica code for generating the elliptic genera. The weight of the Siegel modular form $\tilde \Phi_k( \rho,\sigma, v) $ is given by \begin{eqnarray} k = \frac{1}{2} \sum_{s=0}^{N-1} c^{(0, s)} ( 0 ) .
\end{eqnarray} For the classes $pA$, $p = 1, 2, 3, 5, 7, 11$, we have \begin{equation} k = \frac{24}{ p +1} - 2, \end{equation} for $4B, 6A, 8A$ we have $k = 3, 2, 1$ respectively, and for $14A, 15A$, $k=0$. Finally, as discussed in the introduction, the study of horizon states is much simpler if one can focus on single centered dyons. Such a system has only one horizon. The choice of the contour (\ref{contour}), together with certain kinematic constraints on the charges such as (\ref{keres}), ensures that we are in the attractor region of the axion-dilaton moduli and that the index given by (\ref{b6phi10}) is that of single centred dyons \cite{Sen:2007vb,Sen:2010mz}. All the indices evaluated in this paper are computed using the contour (\ref{contour}). \subsection{The canonical example: $K3\times T^2$} In the work of \cite{Banerjee:2009uk,Jatkar:2009yd} the hair modes of the $1/4$ BPS dyonic black hole in type IIB theory compactified on $K3\times T^2$ were constructed. Here we briefly review this construction. These modes were shown to be deformations localized outside the horizon which preserve supersymmetry. Let us first recall that the dyonic black hole in 4 dimensions is constructed by placing the $5$-dimensional BMPV black hole, or the rotating D1-D5 system \cite{Breckenridge:1996is}, in Taub-NUT space \cite{Gaiotto:2005gf}. Taub-NUT space has a geometry which at the origin is $R^4$ but at infinity is $R^3\times \tilde S^1$. The isometry along $\tilde S^1$ coincides with the angular direction along which the BMPV black hole rotates. The hair modes arise from the collective modes of the D1-D5 system, thought of as an effective string along the $x^5$ and time $t$ directions. These modes are therefore oscillations of the effective string; they are left moving since they have to preserve supersymmetry. \footnote{It is easy to see from the heterotic frame that only left moving oscillations preserve supersymmetry.} After allowing the fermionic zero modes associated with the $12$ broken supersymmetry generators to saturate $(2J)^6/6!$ in the helicity trace given in (\ref{heltrace}), the non-trivial hair modes consist of \begin{itemize} \item 4 left moving fermionic modes arising from the deformations of the gravitino, giving rise to the contribution \begin{equation} Z_{{\rm hair}:1A}^{ 4d: f } = \prod_{l=1}^\infty ( 1- e^{2\pi i l \rho} )^4 \end{equation} \item 3 left moving bosonic modes associated with the oscillations of the effective string in the 3 transverse directions $\mathbb{R}^3$, since Taub-NUT space is asymptotically $\mathbb{R}^3\times \tilde S^1$: \begin{equation} Z_{{\rm hair}: 1A}^{ 4d: { \perp} } = \prod_{l=1}^\infty \frac{1}{( 1- e^{ 2\pi i l \rho} ) ^3} \end{equation} \item 21 left moving bosonic modes, which arise from the deformations of the 21 anti-self-dual forms of type IIB on $K3$. These deformations involve $21$ scalar functions folded with the $2$-form $d\omega_{TN}$ on Taub-NUT space, given by \begin{equation}\label{defhv} \delta H^s = h^s ( v) dv \wedge d\omega _{TN}, \qquad v = t + x^5, \quad s = 1, \cdots 21 \end{equation} Counting these oscillations we obtain \begin{equation} Z_{{\rm hair}: 1A}^{4d:{\rm asd} } = \prod_{l=1}^\infty \frac{1}{( 1- e^{ 2\pi i l \rho} ) ^{21}}. \end{equation} The $21$ anti-self-dual forms arise from compactifying the RR 4-form on the $19$ anti-self-dual $2$-forms of $K3$, together with the anti-self-dual components of the NS 2-form and the RR 2-form of type IIB.
\end{itemize} Note that in these partition functions we have labelled the chemical potential counting the oscillations by $\rho$; this is because exciting these left moving momentum modes corresponds to exciting the electric charge of the dyon \cite{David:2006yn}. Combining these partition functions we obtain \begin{eqnarray} \label{k3h} Z_{ {\rm hair } : 1A}^{4d} &=& Z_{{\rm hair}:1A}^{ 4d: f }\times Z_{{\rm hair}: 1A}^{ 4d: { \perp} } \times Z_{{\rm hair}: 1A}^{4d:{\rm asd} } \\ \nonumber & =& \prod_{l=1}^{\infty}(1-e^{2\pi i (l\rho)})^{-20}. \end{eqnarray} The bosonic hair partition function is given by \begin{equation} Z_{{\rm hair}: 1A }^{ \rm bosons}= Z_{{\rm hair}: 1A}^{ 4d: { \perp} } \times Z_{{\rm hair}: 1A}^{4d:{\rm asd} } = \frac{ e^{2\pi i \rho}}{ \eta^{24} ( \rho) }, \end{equation} which is identical to the partition function counting the degeneracy of purely electric states in this model, without the zero point energy. This observation will be helpful in the generalizations to the CHL models. To obtain the partition function of the horizon states we factor out the hair degrees of freedom, resulting in \begin{equation} Z_{{\rm hor}} = \frac{ 1}{ \Phi_{10} ( \rho, \sigma, v) Z_{ {\rm hair } : 1A}^{4d} }. \end{equation} The index of the horizon states can then be obtained by extracting the Fourier coefficients using the expression \be \label{dhork3} d_{hor}=-(-1)^{Q \cdot P}\int_{{\cal C}}{ d}\rho{ d} \sigma { d}v\; e^{-\pi i (\rho Q^2+\sigma P^2+2v Q\cdot P)}\frac{1}{\tilde\Phi_{10}(\rho,\sigma, v)}\prod_{l=1}^{\infty}(1-e^{2\pi i (l\rho)})^{20}. \ee Here the contour ${\cal C}$ is the same as that defined in (\ref{contour}). In practice, the extraction of $d_{hor}$ amounts to a multiplication of formal power series, as in the sketch below.
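At fixed $P^2$ and $Q\cdot P$, the Fourier coefficients of $1/\tilde\Phi_{10}$ form a series in $e^{2\pi i \rho}$ graded by $m = Q^2/2$, and (\ref{dhork3}) multiplies this series by the expansion of $\prod_l(1-e^{2\pi i l\rho})^{20}$. A minimal Python sketch of this step follows; the coefficients of $1/\tilde\Phi_{10}$ below are placeholders, and the signs and prefactors of (\ref{dhork3}) are suppressed.

\begin{verbatim}
ORDER = 10

def mul_pow(c, l, e):
    """Multiply the truncated series c by (1 - q^l)^e, e any integer."""
    for _ in range(abs(e)):
        if e > 0:
            c = [c[n] - (c[n - l] if n >= l else 0) for n in range(len(c))]
        else:
            c = c[:]
            for n in range(l, len(c)):
                c[n] += c[n - l]
    return c

# expansion of prod_{l >= 1} (1 - q^l)^20 up to q^ORDER
hair = [1] + [0] * ORDER
for l in range(1, ORDER + 1):
    hair = mul_pow(hair, l, 20)

# placeholder coefficients of 1/\tilde\Phi_{10} at fixed P^2 and Q.P,
# graded by m = Q^2/2 (hypothetical numbers, for illustration only)
c_macro = {0: 3, 1: 47, 2: 495}

def d_hor(n):
    return sum(hair[m] * c_macro.get(n - m, 0) for m in range(n + 1))

print([d_hor(n) for n in range(3)])
\end{verbatim}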
\subsection{Orbifolds of $K3\times T^2$} \subsubsection*{$2A$ orbifold} Before we present the analysis for the most general orbifold, let us examine the $2A$ orbifold in detail. In this case, the orbifold acts by exchanging $8$ pairs of anti-self-dual $(1, 1)$ forms among the $19$ anti-self-dual forms of $K3$, together with a $1/2$ shift on $S^1$ \cite{Chaudhuri:1995dj}. Note that because of the $1/2$ shift, the natural unit of momentum on $S^1$ is $N=2$. With this input we are ready to repeat the analysis for the partition function of the hair modes. \begin{itemize} \item The 4 left moving fermionic modes arising from the deformations of the gravitino give rise to the contribution \begin{equation} Z_{{\rm hair}:2A}^{ 4d: f } = \prod_{l=1}^\infty ( 1- e^{4\pi i l \rho} )^4. \end{equation} Note that, since the periodicity is now $\frac{2\pi }{N}$, the unit of momentum is doubled. \item The $3$ transverse bosonic deformations of the effective string along $R^3$ result in \begin{equation} Z_{{\rm hair}: 2A}^{ 4d: { \perp} } = \prod_{l=1}^\infty \frac{1}{( 1- e^{ 4\pi i l \rho} ) ^3}. \end{equation} \item The action of the orbifold projects out $8$ anti-self-dual forms. The analysis for the $13 = 11 + 2$ \footnote{The 2 arises from the anti-self-dual components of the RR 2-form and the NS 2-form.} invariant anti-self-dual forms proceeds as before, except that the unit of momentum is $2$: \begin{equation} Z_{{\rm hair}: 2A}^{4d:{\rm asd} } |_{\rm invariant} = \prod_{l=1}^\infty \frac{1}{( 1- e^{ 4\pi i l \rho} ) ^{13}}. \end{equation} Consider now the following boundary condition on the functions $h^s(v)$ in (\ref{defhv}) for the $8$ projected anti-self-dual forms: \begin{equation} h(v + \frac{2\pi}{N} ) = - h( v) , \qquad \qquad N=2. \end{equation} These deformations pick up a sign when one moves by $1/2$ a unit on $S^1$. The partition function corresponding to these modes is given by \begin{equation} Z_{{\rm hair}: 2A}^{4d:{\rm asd} } |_{\rm twisted} = \prod_{l=1}^\infty \frac{1}{( 1- e^{ 2\pi i (2l -1) \rho} ) ^{8}}. \end{equation} Note that these modes are twisted with respect to the circle of radius $2\pi/N$, $N =2$: they obey anti-periodic boundary conditions on it. However, in supergravity periodicities are measured on the circle of radius $2\pi$, and the modes are periodic on this circle; therefore they can be counted as hair modes. Together, the contribution of the anti-self-dual forms to the partition function is given by \begin{eqnarray} Z_{{\rm hair}: 2A}^{4d:{\rm asd} } &=& Z_{{\rm hair}: 2A}^{4d:{\rm asd} } |_{\rm invariant} \times Z_{{\rm hair}: 2A}^{4d:{\rm asd} } |_{\rm twisted} , \\ \nonumber &=& \prod_{l=1}^\infty \frac{1}{( 1- e^{ 4\pi i l \rho})^5} \prod_{l=1}^\infty \frac{1}{( 1- e^{ 2\pi i l \rho} ) ^{8}} \end{eqnarray} \end{itemize} Now combining all the hair modes we obtain \begin{eqnarray} Z_{ {\rm hair } : 2A}^{4d} &=& Z_{{\rm hair}:2A}^{ 4d: f }\times Z_{{\rm hair}: 2A}^{ 4d: { \perp} } \times Z_{{\rm hair}: 2A}^{4d:{\rm asd} } \\ \nonumber & =& \prod_{l=1}^{\infty}(1-e^{4\pi i l\rho})^{-4} ( 1 - e^{2\pi i l \rho} )^{-8} . \end{eqnarray} Observe that the partition function of the bosonic hair modes is given by \begin{eqnarray} Z_{{\rm hair}: 2A }^{ 4d:\; b} &=& Z_{{\rm hair}: 2A}^{ 4d: { \perp} } \times Z_{{\rm hair}: 2A}^{4d:{\rm asd} } , \\ \nonumber &=& \prod_{l =1}^\infty ( 1- e^{ 4\pi i l \rho } )^{-8} ( 1- e^{2\pi i l \rho} )^{-8} , \\ \nonumber &=& \frac{e^{2\pi i \rho} }{ \eta^8 ( 2 \rho)\eta^{8} ( \rho) } . \end{eqnarray} This is the partition function of the fundamental string in the $N=2$ CHL orbifold of the heterotic theory, with the zero point energy removed \cite{Dabholkar:2005by,David:2006yn}. \subsubsection*{$pA$ orbifolds, $p = 2, 3, 5, 7$} For orbifolds of prime order, the construction of the hair modes proceeds as discussed in detail for the $2A$ orbifold. In each case we need to count the number of $2$-forms which are left invariant and the number which pick up phases, and evaluate the partition function. The result for the bosonic hair modes is given by \be\label{bosonic} Z_{{\rm hair} : \; pA }^{4d: \; b} =\prod_{l=1}^{\infty}\frac{1}{(1-e^{2\pi i \rho N l})^{k+2}(1-e^{2\pi i l\rho})^{k+2}}, \ee where \begin{equation} k = \frac{24}{ p +1} - 2. \end{equation} Note that this is the partition function of states carrying only electric charges, that is, of the fundamental string, without the zero point energy \cite{David:2006yn}. Now including the $4$ fermionic deformations we obtain \begin{eqnarray}\label{chlh} Z_{{\rm hair}:\; pA}^{4d} &=&\prod_{l=1}^{\infty}(1-e^{2\pi i (Nl\rho)})^{-(k+2)}(1-e^{2\pi i (l\rho)})^{-(k+2)}(1-e^{2\pi i (Nl\rho)})^{4}\\ \nn &=& \prod_{l=1}^{\infty}(1-e^{2\pi i (Nl\rho)})^{-{2k}}\prod_{N\nmid l}(1-e^{2\pi i (l\rho)})^{-(k+2)} \end{eqnarray} It is useful to rewrite this expression as follows: \begin{eqnarray}\label{chlall} Z_{{\rm hair}:\; pA}^{4d}&=&\prod_{l=1}^{\infty} (1-e^{2\pi i (Nl\rho)})^{-\sum c^{(0,s)}(0)}\prod_{N\nmid l}(1-e^{2\pi i (l\rho)})^{-\sum e^{-2\pi i sl/N}c^{(0,s)}(0)}\\ \nn &=& \prod_{l=1}^{\infty}(1-e^{2\pi i (l\rho)})^{-\sum e^{-2\pi i sl/N}c^{(0,s)}(0)} \end{eqnarray} The sum over $s$ runs from $s=0$ to $N-1$, and $N\nmid l$ means that $N$ does not divide $l$. The equality of the two lines of (\ref{chlh}) can be verified directly as an identity of truncated power series, as in the sketch below.
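A quick check for the $2A$ model ($N=2$, $k=6$); the truncation order is arbitrary and kept small here for speed:

\begin{verbatim}
import sympy as sp

q = sp.symbols('q')
L, k, N = 8, 6, 2

lhs, rhs = sp.Integer(1), sp.Integer(1)
for l in range(1, L + 1):
    lhs *= (1 - q**(N*l))**(-(k + 2)) * (1 - q**l)**(-(k + 2)) \
           * (1 - q**(N*l))**4
    rhs *= (1 - q**(N*l))**(-2*k)
    if l % N != 0:
        rhs *= (1 - q**l)**(-(k + 2))

# the two product forms agree order by order in q
diff = sp.series(lhs - rhs, q, 0, L).removeO()
assert sp.expand(diff) == 0
\end{verbatim}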
The values of $\sum_{s=0}^{N-1}e^{-2\pi i sl/N}c^{(0,s)}(-b^2)$ for prime $N$ are listed in table \ref{tablep}. \begin{table}[H] \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|} \hline & & & \\ $N$ & $l$ & $-b^2$ & $\sum_{s=0}^{N-1}e^{-2\pi i sl/N}c^{(0,s)}(-b^2)$\\ & & & \\ \hline & & & \\ $p$ & $N|l$ & 0 & $2k=\frac{48}{N+1}-4$\\ & & & \\ & & $-1$ & 2\\ \cline{2-4} & & & \\ & $N\nmid l$ & 0 & $k+2=\frac{24}{N+1}$\\ & & & \\ & & $-1$ & 0\\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \caption{Values of $\sum_{s=0}^{N-1}e^{-2\pi i s l/N}c^{(0,s)}(-b^2)$ for orbifolds of $K3$ with prime order \\ $(N=p)$} \label{tablep} \renewcommand{\arraystretch}{0.5} \end{table} \subsubsection*{Orbifolds of composite order: $4B, 6A, 8A$} One can count the hair modes in a similar fashion to the orbifolds of prime order. The only difference arises for the bosonic modes $Z_{\rm hair}^{\rm bosons}$, which need to be replaced by the partition function of the fundamental string in these theories, without the zero point energy. Including the 4 fermionic hair modes, we see that the answer can be written in the same form as for orbifolds of prime order: \begin{eqnarray} Z_{{\rm hair}: {\rm CHL}}^{4d} &=&\prod_{l=1}^{\infty}(1-e^{2\pi i (Nl\rho)})^{-\sum c^{(0,s)}(0)}\prod_{N\nmid l}(1-e^{2\pi i (l\rho)})^{-\sum e^{-2\pi i sl/N}c^{(0,s)}(0)}. \end{eqnarray} The sum ranges from $s=0$ to $N-1$. This can be rewritten as \begin{eqnarray} Z_{{\rm hair}: {\rm CHL}}^{4d}&=&\prod_{l=1}^{\infty}(1-e^{2\pi i (l\rho)})^{-\sum e^{-2\pi i sl/N}c^{(0,s)}(0)}. \end{eqnarray} For the geometric CHL orbifolds, we list $\sum_{s=0}^{N-1}e^{-2\pi i s l/N}c^{(0,s)}(-b^2)$ for $N=4,6,8$ in table \ref{tablenonp}. \begin{table}[H] \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|} \hline & & & \\ $N$ & $l$ & $-b^2$ & $\sum_{s=0}^{N-1}e^{-2\pi i sl/N}c^{(0,s)}(-b^2)$\\ & & & \\ \hline & & & \\ $4$ & $4|l$ & 0 & $6$\\ & & & \\ & & $-1$ & 2\\ \cline{2-4} & & & \\ & $2| l,\; 4\nmid l$ & 0 & 6\\ & & & \\ \cline{2-4} & & & \\ & $2\nmid l$ & 0 & 4\\ & & & \\ \hline \hline & & & \\ $6$ & $6|l$ & 0 & $4$\\ & & & \\ & & $-1$ & 2\\ \cline{2-4} & & & \\ & $2| l,\; 6\nmid l$ & 0 & 4\\ & & & \\ \cline{2-4} & & & \\ & $3| l,\; 6\nmid l$ & 0 & 4\\ & & & \\ \cline{2-4} & & & \\ & $2\nmid l, 3\mid l$ & 0 & 2\\ & & & \\ \hline \hline & & & \\ $8$ & $8|l$ & 0 & $2$\\ & & & \\ & & $-1$ & 2\\ \cline{2-4} & & & \\ & $2| l,\; 4\nmid l$ & 0 & 3\\ & & & \\ \cline{2-4} & & & \\ & $4| l,\; 8\nmid l$ & 0 & 4\\ & & & \\ \cline{2-4} & & & \\ & $2\nmid l$ & 0 & 2\\ & & & \\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \caption{Values of $\sum_{s=0}^{N-1}e^{-2\pi i s l/N}c^{(0,s)}(-b^2)$ for non-prime CHL orbifolds of $K3$. $\sum_{s=0}^{N-1}e^{-2\pi i s l/N}c^{(0,s)}(-1)=0$ if $N\nmid l$ for any of these cases.} \label{tablenonp} \renewcommand{\arraystretch}{0.5} \end{table}
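The prime-order entries of Table \ref{tablep} can be cross-checked directly from the $q^0$ terms of (\ref{2atwist}): the expansions $A = \tfrac{1}{4} y + \tfrac{5}{2} + \tfrac{1}{4} y^{-1} + {\cal O}(q)$ (so that the $K3$ elliptic genus $8A$ begins as $2y + 20 + 2y^{-1}$), $B = -y + 2 - y^{-1} + {\cal O}(q)$ and ${\cal E}_N = 1 + {\cal O}(q)$ determine $c^{(0,s)}(0)$ and $c^{(0,s)}(-1)$. A sketch of the check:

\begin{verbatim}
from fractions import Fraction as Fr
from cmath import exp, pi

cA = {0: Fr(5, 2), -1: Fr(1, 4)}  # q^0 terms of A: (1/4)y + 5/2 + (1/4)/y
cB = {0: Fr(2), -1: Fr(-1)}       # q^0 terms of B:      -y + 2 - 1/y

def c0(s, m, N):
    """c^{(0,s)}(m) at q^0, from the pA twisted elliptic genera."""
    if s == 0:
        return Fr(8, N) * cA[m]
    return Fr(8, N * (N + 1)) * cA[m] - Fr(2, N + 1) * cB[m]

def coeff_sum(l, m, N):
    z = sum(exp(-2j * pi * s * l / N) * float(c0(s, m, N)) for s in range(N))
    return round(z.real)

for N in (2, 3, 5, 7):
    k = Fr(24, N + 1) - 2
    assert coeff_sum(N, 0, N) == 2 * k   # N divides l,        -b^2 =  0
    assert coeff_sum(1, 0, N) == k + 2   # N does not divide l, -b^2 =  0
    assert coeff_sum(N, -1, N) == 2      # N divides l,        -b^2 = -1
    assert coeff_sum(1, -1, N) == 0      # N does not divide l, -b^2 = -1
\end{verbatim}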
Using the data from table \ref{tablenonp} we obtain \begin{eqnarray}\label{chlcom} Z_{{\rm hair}: 4B}^{4d} &=&\prod_{l=1}^{\infty}(1-e^{2\pi i (4l\rho)})^{4} (1-e^{2\pi i (4l\rho)})^{-4}(1-e^{2\pi i (2l\rho)})^{-2}(1-e^{2\pi i (l\rho)})^{-4}\\ \nn &=& \prod_{l=1}^{\infty}(1-e^{2\pi i (2l\rho)})^{-2}(1-e^{2\pi i (l\rho)})^{-4}\\ \nn Z_{{\rm hair}: 6A}^{4d}&=&\prod_{l=1}^{\infty}(1-e^{2\pi i (6l\rho)})^{4} (1-e^{2\pi i (6l\rho)})^{-2}(1-e^{2\pi i (2l\rho)})^{-2}(1-e^{2\pi i (3l\rho)})^{-2}(1-e^{2\pi i (l\rho)})^{-2}\\ \nn &=&\prod_{l=1}^{\infty}(1-e^{2\pi i (6l\rho)})^{2} (1-e^{2\pi i (2l\rho)})^{-2}(1-e^{2\pi i (3l\rho)})^{-2}(1-e^{2\pi i (l\rho)})^{-2}\\ \nn Z_{{\rm hair}: 8A}^{4d}&=&\prod_{l=1}^{\infty}(1-e^{2\pi i (8l\rho)})^{4} (1-e^{2\pi i (8l\rho)})^{-2}(1-e^{2\pi i (2l\rho)})^{-1}(1-e^{2\pi i (4l\rho)})^{-1}(1-e^{2\pi i (l\rho)})^{-2}\\ \nn &=&\prod_{l=1}^{\infty}(1-e^{2\pi i (8l\rho)})^{2} (1-e^{2\pi i (2l\rho)})^{-1}(1-e^{2\pi i (4l\rho)})^{-1}(1-e^{2\pi i (l\rho)})^{-2}. \end{eqnarray} \subsubsection*{Horizon states} Factoring out the hair degrees of freedom, the partition function of the horizon states is given by \be Z_{{\rm hor}:{\rm CHL} }^{4d}=-\frac{1}{\tilde\Phi_k(\rho,\sigma, v)}\prod_{l=1}^{\infty}(1-e^{2\pi i (l\rho)})^{\sum_s e^{-2\pi i sl/N}c^{(0,s)}(0)} \ee It is useful to use the product form of $\tilde\Phi_k$ given in (\ref{phi10}) to rewrite the partition function of the horizon states as follows \begin{eqnarray} \label{horparti} Z_{{\rm hor}:{\rm CHL} }^{4d} &=& -e^{-2\pi i(\rho+\sigma/N+v)} \prod_{r=0}^{N-1}\prod_{\begin{smallmatrix}k'\in \mathbb{Z}+r/N,l\in \mathbb{Z},\\ j\in \mathbb{Z}\\ k'>0,l\ge 0\end{smallmatrix}}(1-e^{2\pi i(k'\sigma+ l\rho+jv)})^{-\sum_s e^{-2\pi isl/N}c^{(r,s)}(4k'l-j^2)} \nn \\ &&\times \prod_{l=1}^{\infty}(1-e^{2\pi i (Nl\rho+v)})^{-2}\prod_{l=1}^{\infty}(1-e^{2\pi i (Nl\rho-v)})^{-2} (1-e^{-2\pi i v})^{-2}\\ \nn &=&-e^{-2\pi i(\rho+\sigma/N)} \prod_{r=0}^{N-1} \prod_{\begin{smallmatrix}k'\in \mathbb{Z}+r/N,l\in \mathbb{Z},\\ j\in \mathbb{Z}\\ k' >0,l\ge 0\end{smallmatrix}}(1-e^{2\pi i(k'\sigma+ l\rho+jv)})^{-\sum_s e^{-2\pi isl/N}c^{(r,s)}(4k'l-j^2)}\\ \nn &&\times \prod_{l=1}^{\infty}(1-e^{2\pi i (Nl\rho+v)})^{-2}\prod_{l=1}^{\infty}(1-e^{2\pi i (Nl\rho-v)})^{-2} (e^{\pi i v}-e^{-\pi i v})^{-2}. \end{eqnarray} This form of the horizon partition function will be useful in the next section. The index of the horizon states is given by \begin{eqnarray} d_{{\rm hor}:{\rm CHL} }&=& -(-1)^{Q \cdot P}\int_{{\cal C}}{d}\rho{ d} \sigma { d}v\; e^{-\pi i (N\rho Q^2+\sigma P^2/N+2v Q\cdot P)} \frac{1}{\tilde\Phi_k(\rho,\sigma, v)}\times \nonumber \\ & & \qquad\qquad\qquad \prod_{l=1}^{\infty}(1-e^{2\pi i (l\rho)})^{\sum_s e^{-2\pi i sl/N}c^{(0,s)}(0)}. \end{eqnarray} \subsubsection*{Non-geometric orbifolds: $11A, 14A, 15A, 23A$} For completeness, we note that we can extend the counting of hair modes to $g'$ orbifolds of $K3$ where $g'$ corresponds to all the remaining conjugacy classes of $M_{23}$. The CHL orbifolds also form a part of these; however, the ones discussed in this subsection are non-geometric.
The hair modes in these cases can also be written as \begin{eqnarray} Z_{{\rm hair}: g'}^{4d}&=& \prod_{l=1}^{\infty}(1-e^{2\pi i (l\rho)})^{-\sum e^{-2\pi i sl/N}c^{(0,s)}(0)}. \end{eqnarray} To be explicit, we list the values of $\sum_{s=0}^{N-1}e^{-2\pi i s l/N}c^{(0,s)}(-b^2)$ for $N=11,14,15,23$ in table \ref{tablenong}. \begin{table}[H] \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|} \hline & & & \\ $N$ & $l$ & $-b^2$ & $\sum_{s=0}^{N-1}e^{-2\pi i sl/N}c^{(0,s)}(-b^2)$\\ & & & \\ \hline & & & \\ $11$ & $11|l$ & 0 & $0$\\ & & & \\ & & $-1$ & 2\\ \cline{2-4} & & & \\ & $11\nmid l$ & 0 & 2\\ & & & \\ \hline \hline & & & \\ $14$ & $14|l$ & 0 & $0$\\ & & & \\ & & $-1$ & 2\\ \cline{2-4} & & & \\ & $2| l,\; 7\nmid l$ & 0 & 2\\ & & & \\ \cline{2-4} & & & \\ & $7| l,\; 2\nmid l$ & 0 & 2 \\ & & & \\ \cline{2-4} & & & \\ & $2\nmid l, 7\nmid l$ & 0 & 1\\ & & & \\ \hline \hline & & & \\ $15$ & $15|l$ & 0 & $0$\\ & & & \\ & & $-1$ & 2\\ \cline{2-4} & & & \\ & $3| l,\; 5\nmid l$ & 0 & 2\\ & & & \\ \cline{2-4} & & & \\ & $5| l,\; 3\nmid l$ & 0 & 2\\ & & & \\ \cline{2-4} & & & \\ & $3\nmid l, 5\nmid l$ & 0 & 1\\ & & & \\ \hline \hline & & & \\ $23$ & $23|l$ & 0 & $-2$\\ & & & \\ & & $-1$ & 2\\ \cline{2-4} & & & \\ & $23\nmid l$ & 0 & 1\\ & & & \\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \caption{Values of $\sum_{s=0}^{N-1}e^{-2\pi i s l/N}c^{(0,s)}(-b^2)$ for non-geometric orbifolds of $K3$ where $g'\in [M_{23}]$. $\sum_{s=0}^{N-1}e^{-2\pi i s l/N}c^{(0,s)}(-1)=0$ if $N\nmid l$ for any of these cases.} \label{tablenong} \renewcommand{\arraystretch}{0.5} \end{table} Using the results from table \ref{tablenong} we write \begin{eqnarray}\label{m23} Z_{{\rm hair}: 11A}^{4d}&=&\prod_{l=1}^{\infty}(1-e^{2\pi i (11l\rho)})^{4}(1-e^{2\pi i (l\rho)})^{-2}(1-e^{2\pi i (11l\rho)})^{-2}\\ Z_{{\rm hair}:14A}^{4d}&=&\prod_{l=1}^{\infty}(1-e^{2\pi i (14l\rho)})^{4}(1-e^{2\pi i (14l\rho)})^{-1}\\ \nn &&(1-e^{2\pi i (2l\rho)})^{-1}(1-e^{2\pi i (7l\rho)})^{-1}(1-e^{2\pi i (l\rho)})^{-1}\\ Z_{{\rm hair}:15A}^{4d}&=&\prod_{l=1}^{\infty}(1-e^{2\pi i (15l\rho)})^{4}(1-e^{2\pi i (15l\rho)})^{-1}\\ \nn &&(1-e^{2\pi i (3l\rho)})^{-1}(1-e^{2\pi i (5l\rho)})^{-1}(1-e^{2\pi i (l\rho)})^{-1}\\ Z_{{\rm hair}: 23A}^{4d}&=&\prod_{l=1}^{\infty}(1-e^{2\pi i (23l\rho)})^{4}(1-e^{2\pi i (23l\rho)})^{-1} (1-e^{2\pi i (l\rho)})^{-1} \end{eqnarray} The partition function of the horizon states in these models is given by the same expression as in (\ref{horparti}), with $N$ replaced by the order of the conjugacy class and the coefficients $c^{(r, s)}$ read off from the respective twisted elliptic genus. Let us conclude by writing the general formula for the horizon states as \begin{eqnarray}\nn Z_{\rm{hor}: \; g'}^{4d}&=&-e^{-2\pi i(\rho+\sigma/N)} \prod_{r=0}^{N-1} \prod_{\begin{smallmatrix}k'\in \mathbb{Z}+r/N,l\in \mathbb{Z},\\ j\in \mathbb{Z}\\ k' >0,l\ge 0\end{smallmatrix}}(1-e^{2\pi i(k'\sigma+ l\rho+jv)})^{-\sum_s e^{-2\pi isl/N}c^{(r,s)}(4k'l-j^2)}\\ \label{horparti2} &&\times \prod_{l=1}^{\infty}(1-e^{2\pi i (Nl\rho+v)})^{-2}\prod_{l=1}^{\infty}(1-e^{2\pi i (Nl\rho-v)})^{-2} (e^{\pi i v}-e^{-\pi i v})^{-2} \end{eqnarray} \subsection{Toroidal orbifolds} In this section we construct the hair modes for the ${\cal N}=4$ theories obtained by freely acting $\mathbb{Z}_2, \mathbb{Z}_3$ involutions of $T^6$ \cite{Sen:1995ff}. Let us first briefly recall how these are constructed.
In the type IIB frame, these are obtained by reflecting $4$ of the co-ordinates of $T^6$, together with a half shift along one of the circles. The type IIA description of the theory is that of a freely acting orbifold with the action of $(-1)^{F_L}$ and a $1/2$ shift along one of the circles of $T^6$.\footnote{For details of these descriptions and the dyon configuration refer \cite{David:2006ji}.} A similar compactification of order 3, given by a $2\pi/3$ rotation along one 2D plane of $T^4$ and a $-2\pi/3$ rotation along another, plus a $1/3$ shift along one of the circles of $T^2$, was also discussed in \cite{David:2006ji}. We call these models the $\mathbb{Z}_2$ and $\mathbb{Z}_3$ toroidal orbifolds. One key property of these models, which will be important in what follows, is that the breaking of the 32 supersymmetries of type IIB to 16 is governed by the size of $S^1$. This was not the case for the orbifolds of $K3\times T^2$, where supersymmetry is broken by $K3$. For the toroidal models, if the size of $S^1$ is taken to infinity the theory effectively behaves as though it has 32 supersymmetries. We will use this fact to propose that certain fermionic zero modes, which were present for the CHL models, become singular at the horizon. The dyon partition function for the toroidal models is given by \cite{David:2006ru} \begin{eqnarray}\label{siegform2} \tilde{\Phi}_k(\rho,\sigma,v)&=&e^{2\pi i(\rho+ v)}\\ \nn &&\prod_{b=0,1}\prod_{r=0}^{N-1} \prod_{\begin{smallmatrix}k'\in \mathbb{Z}+ \frac{r}{N},l\in \mathbb{Z},\\ j\in 2\mathbb{Z}+b\\ k',l\geq0, \; j<0\; {\rm for}\; k'=l=0\end{smallmatrix}} (1-e^{2\pi i(k'\sigma+l\rho+jv)})^{\sum_{s=0}^{N-1}e^{2\pi isl/N}c_b^{r,s}(4k'l-j^2)}. \end{eqnarray} The coefficients $c^{(r,s)}$ are read out from the following twisted elliptic genus for the $\mathbb{Z}_2$ orbifold: \begin{eqnarray}\label{2tortwist} F^{(0, 0)} &=& 0 , \\ \nonumber F^{(0, 1)} &=& \frac{8}{3} A(\tau, z) - \frac{4}{3} B(\tau, z) {\cal E}_2(\tau) , \\ \nonumber F^{(1, 0)} &=& \frac{8}{3} A(\tau, z) + \frac{2}{3} B(\tau, z) {\cal E}_2(\frac{\tau}{2} ), \\ \nonumber F^{(1, 1)} &=& \frac{8}{3}A(\tau, z) + \frac{2}{3} B(\tau, z) {\cal E}_2( \frac{\tau +1}{2} ). \end{eqnarray} The corresponding Siegel form of weight $k =2$ can be written as \begin{equation}\label{phitwo} \tilde \Phi_2 (\rho, \sigma, v) = \frac{ \tilde \Phi_{6}^2(\rho, \sigma, v) }{\tilde \Phi_{10}(\rho, \sigma, v) }, \end{equation} where $\tilde\Phi_6$ is the weight $6$ Siegel modular form associated with the order 2 CHL orbifold. For the $\mathbb{Z}_3$ toroidal case, the twisted elliptic genus is given by \begin{eqnarray} \label{3tortwist} F^{(0, 0)} &=& 0 \\ \nonumber F^{(0,s)}&=&A(\tau,z)-\frac{3}{4}B(\tau,z){\cal E}_3(\tau) \\ \nonumber F^{(r,rk)}&=&A(\tau,z)+\frac{1}{4}B(\tau,z){\cal E}_3(\frac{\tau+k}{3}), \quad {r=1,2}. \end{eqnarray} The Siegel modular form associated with the $\mathbb{Z}_3$ toroidal orbifold has weight $k=1$ and is given by \begin{equation}\label{phi1} \tilde \Phi_1 (\rho, \sigma, v) = \frac{\tilde\Phi_4^{3/2} (\rho, \sigma, v)}{\tilde\Phi_{10}^{1/2}(\rho, \sigma, v) }, \end{equation} where $\tilde\Phi_4$ is the weight $4$ Siegel modular form associated with the order 3 CHL orbifold. Let us now construct the hair modes and horizon states for these models. \subsubsection*{$T^6/\mathbb{Z}_2$ model} \begin{itemize} \item Just as in the case of the CHL models, we have $4$ left moving fermions. This gives rise to \begin{equation} Z_{{\rm hair}: T^6/\mathbb{Z}_2}^{4d: f }= \prod_{l =1}^\infty ( 1- e^{2\pi i (2l) \rho})^4.
\end{equation} \item The deformations corresponding to the motion of the effective string in the $3$ transverse directions of the $R^3\times \tilde S^1$ of Taub-NUT space, together with the fluctuations of the anti-self-dual forms, can be determined easily by examining the partition function of the fundamental string in this theory and removing the zero point energy. This partition function was determined in \cite{David:2006ji}; using this result we obtain \footnote{One can also obtain this by counting the number of invariant $2$-forms and the forms which pick up a phase, as done in \cite{David:2006ud}. } \begin{equation} Z_{{\rm hair}: T^6/\mathbb{Z}_2}^{4d\; b} = \prod_{l =1}^\infty \left[ ( 1- e^{2\pi i (2l -1) \rho})^{8 } ( 1- e^{4\pi i l \rho})^{-8 } \right]. \end{equation} \item Contribution of the zero modes: the quantum mechanics of the bosonic zero modes describing the motion of the D1-D5 system in Taub-NUT space results in the following partition function \cite{David:2006yn} \begin{equation}\label{torzero} Z_{{\rm hair}: T^6/\mathbb{Z}_2}^{4d:\;{\rm zero modes}} =- e^{2\pi i v} ( 1- e^{2\pi i v} )^{-2} . \end{equation} For orbifolds of $K3$, this contribution from the bosonic zero modes was cancelled by the zero modes of $4$ fermions from the right moving sector carrying angular momentum $J= \pm \frac{1}{2}$, whose partition function is given by $-( e^{\pi i v} - e^{-\pi i v})^2$ \cite{Banerjee:2009uk}. However, for the toroidal models we propose that these zero modes do not form part of the hair: they are either singular at the horizon or not localized outside the horizon. This is possible because the fact that we are in a theory with $16$ supersymmetries is tied to the radius of $S^1$. Verification of this proposal would involve a detailed study of the zero mode wave functions, which we leave for the future. However, we will perform consistency checks of this proposal in section \ref{signind} by evaluating the index of the horizon states. \end{itemize} Thus the hair modes of the $\mathbb{Z}_2$ toroidal model are given by \begin{equation}\label{4dtorhair1} Z_{{\rm hair}: T^6/\mathbb{Z}_2}^{4d} = -( e^{\pi i v} - e^{-\pi i v})^{-2} \prod_{l =1}^\infty \left[ ( 1- e^{2\pi i (2l -1) \rho})^{8 } ( 1- e^{4\pi i l \rho})^{-4 } \right]. \end{equation} The partition function of the horizon states of this model is given by \begin{equation} Z_{{\rm hor}: T^6/\mathbb{Z}_2}^{4d} = -\frac{1}{\tilde \Phi_2( \rho, \sigma, v) Z_{ {\rm hair}: T^6/\mathbb{Z}_2}^{4d} }, \end{equation} where $\tilde\Phi_2(\rho, \sigma, v) $ is given in (\ref{phitwo}) or (\ref{siegform2}). The toroidal models have another special feature: they admit Wilson lines along $T^4$ \cite{David:2006ru}, whose partition function is given by \begin{eqnarray}\label{wilson1} Z_{{\rm Wilson}: T^4/\mathbb{Z}_2} =\prod_{l=1}^\infty \left[ (1- e^{2\pi i (2l -1) \rho + 2\pi i v })^2 (1- e^{2\pi i (2l -1) \rho - 2\pi i v })^2 (1- e^{2\pi i (2l -1) \rho })^{-4} \right]. \end{eqnarray} It is possible that the Wilson lines might also be part of the hair modes. In section \ref{signind} we will see that including the Wilson lines as hair modes, instead of the bosonic zero modes given in (\ref{torzero}), does not preserve the positivity of the index of the horizon states. For later reference, the first few coefficients of the $\rho$-dependent product in (\ref{4dtorhair1}) are generated in the sketch below.
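Because the hair now contains factors with positive exponents, the expansion of $\prod_l (1-e^{2\pi i(2l-1)\rho})^{8}(1-e^{4\pi i l\rho})^{-4}$ has coefficients of both signs, which illustrates why positivity of $d_{\rm hor}$ is not manifest in these models. A small sketch generating these coefficients (the truncation order is arbitrary):

\begin{verbatim}
ORDER = 8

def mul_pow(c, l, e):
    """Multiply the truncated series c by (1 - q^l)^e for any integer e."""
    for _ in range(abs(e)):
        if e > 0:
            c = [c[n] - (c[n - l] if n >= l else 0) for n in range(len(c))]
        else:
            c = c[:]
            for n in range(l, len(c)):
                c[n] += c[n - l]
    return c

Z = [1] + [0] * ORDER
for l in range(1, ORDER + 1):
    Z = mul_pow(Z, l, 8 if l % 2 == 1 else -4)
print(Z)   # starts 1, -8, 32, -96, ...: the coefficients alternate in sign
\end{verbatim}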
\begin{equation}\label{4dtorhair2} Z_{\rm{hair}: T^6/\mathbb{Z}_3}^{4d} = -(e^{\pi i v}-e^{-\pi i v})^{-2}\prod_{l=1}^\infty \left[ \frac{ (1-e^{2\pi i (3l-1)\rho})^{3} (1-e^{2\pi i (3l-2)\rho})^{3}} {(1-e^{2\pi i (3l)\rho})^{-2}} \right]. \end{equation} The partition function of the horizon states is given by \begin{equation} Z_{\rm{hor}: T^6/\mathbb{Z}_3}^{4d} = -\frac{1}{\tilde \Phi_1(\rho, \sigma, v) Z_{\rm{hair}: T^6/\mathbb{Z}_3}^{4d} } , \end{equation} where $\tilde\Phi_1$ is given by (\ref{phi1}) or (\ref{siegform2}). For reference we also provide the partition function of the Wilson lines in this model: \begin{eqnarray}\label{wilson2} &&Z_{{\rm Wilson}: T^4/\mathbb{Z}_3} = \nonumber \\ \nonumber &&\prod_{l =1}^\infty \left[ \frac{ ( 1-e^{2\pi i ( ( 3l -1)\rho + v)} ) ( 1-e^{2\pi i ( ( 3l -2)\rho + v)} ) ( 1-e^{2\pi i ( ( 3l -1)\rho - v) } ) ( 1-e^{2\pi i ( ( 3l -2)\rho - v) } ) } {( 1-e^{2\pi i ( ( 3l -1)\rho )} )^2 ( 1-e^{2\pi i ( ( 3l -2)\rho )} )^2 } \right]. \end{eqnarray} From the expression for the Wilson lines and the infinite product representation of $\tilde \Phi_k$ given in (\ref{siegform2}), we obtain the following useful expression for the partition function of the horizon modes for both toroidal orbifolds: \begin{eqnarray} \nonumber && Z_{\rm{hor}; \; T^6/\mathbb{Z}_N}^{4d} =e^{-2\pi i \rho} \prod_{r=0}^{N-1} \prod_{\begin{smallmatrix}k\in \mathbb{Z}+r/N,l\in \mathbb{Z},\\ j\in \mathbb{Z}\\ k>0,l\ge 0\end{smallmatrix}}(1-e^{2\pi i(k\sigma+ l\rho+jv)})^{-\sum_s e^{-2\pi isl/N}c^{(r,s)}(4kl-j^2)} \\ &&\times \prod_{l=1}^{\infty} \left[ (1-e^{2\pi i (Nl\rho+v)})^{-2} (1-e^{2\pi i (Nl\rho-v)})^{-2} \right] ( e^{\pi i v} - e^{-\pi i v}) ^2 \times Z_{{\rm Wilson}: T^4/\mathbb{Z}_N}. \nonumber \\ \label{4dhortor} \end{eqnarray} \section{Horizon states for the BMPV black hole}\label{check} We now examine the BMPV black hole in 5 dimensions, that is, the transverse space now does not contain the Taub-Nut solution. The main reason for studying the problem in 5 dimensions is that the near horizon geometry of the BMPV black hole in 5 dimensions is the same as that of the $1/4$ BPS dyon in 4 dimensions. This implies that the partition functions of the horizon states of these two systems should be identical. In this section we construct the partition function of the hair and the horizon states for the BMPV black hole in type IIB on $K3\times S^1/g'$ as well as on toroidal orbifolds of $T^5$. Here $g'$ corresponds to all the conjugacy classes of $M_{23}$. \subsection{Partition function of BMPV black holes} The partition function for these black holes in the canonical compactification $K3\times S^1$ was constructed in \cite{Banerjee:2009uk}. The same analysis can be extended to all the CHL models. The partition function receives contributions from the following sectors. \begin{itemize} \item The bound states of the D1-D5 system: this is given by the elliptic genus of the symmetric product of $K3/{g'}$. This contribution was evaluated in \cite{David:2006yn} and is given by \begin{eqnarray} Z^{5d}_{S^N K3/g'}= e^{-2\pi i\sigma/N} \prod_{r=0}^{N-1} \prod_{\begin{smallmatrix}k\in \mathbb{Z}+r/N,l\in \mathbb{Z},\\ j\in \mathbb{Z}\\ k>0,l\ge 0\end{smallmatrix}}(1-e^{2\pi i(k\sigma+ l\rho+jv)})^{-\sum_s e^{-2\pi isl/N}c^{(r,s)}(4kl-j^2)}. \nonumber \\ \end{eqnarray} \item The centre of mass motion of the D1-D5 system in flat space. The degrees of freedom consist of $4$ bosons and $4$ fermions. Two pairs of bosons carry angular momentum $J=\pm 1$ \cite{Banerjee:2009uk}.
\begin{equation} Z^{5d}_{{\rm c.o.m}} = \prod_{l=1}^{\infty} \left[ (1-e^{2\pi i (Nl\rho+v)})^{-2}(1-e^{2\pi i (Nl\rho-v)})^{-2} (1-e^{2\pi i Nl\rho})^4 \right]. \end{equation} Note that the only difference from the canonical model is that the unit of momentum on $S^1$ is $N$ due to the $1/N$ shift. \item $4$ right chiral zero modes, which contribute as $(-1)^{2J} e^{2\pi i J v}$ and occur in pairs with $J = \pm\frac{1}{2}$: \begin{equation} Z^{5d}_{{\rm zero modes}} = - ( e^{\pi i v} - e^{-\pi i v} )^2. \end{equation} \item A shift of $e^{-2\pi i \rho}$ to take into account the difference between the electric charge measured at infinity and at the horizon \cite{Banerjee:2009uk}. \end{itemize} Combining all the sectors we obtain the following expression for the partition function of the BMPV black hole for all orbifolds of $K3\times S^1$: \begin{eqnarray}\nn Z^{5d}_{g'} &=& -e^{-2\pi i(\rho+\sigma/N)} \prod_{r=0}^{N-1} \prod_{\begin{smallmatrix}k\in \mathbb{Z}+r/N,l\in \mathbb{Z},\\ j\in \mathbb{Z}\\ k>0,l\ge 0\end{smallmatrix}}(1-e^{2\pi i(k\sigma+ l\rho+jv)})^{-\sum_s e^{-2\pi isl/N}c^{(r,s)}(4kl-j^2)}\\ \nn && \times \prod_{l=1}^{\infty} \left[ (1-e^{2\pi i (Nl\rho+v)})^{-2}(1-e^{2\pi i (Nl\rho-v)})^{-2} (e^{\pi i v}-e^{-\pi i v})^{2} (1-e^{2\pi i Nl\rho})^4 \right]. \\ \label{d5d} \end{eqnarray} Here the coefficients $c^{(r, s)}$ have to be read out from the twisted elliptic genus of $K3$ by $g'$ corresponding to the conjugacy classes of $M_{23}$. Using the counting of states for the dyon partition function done in \cite{David:2006ru} we can extend the analysis to the toroidal models. We present the analysis in some detail for the $T^6/\mathbb{Z}_2$ model. Here the contributions arise from the following: \begin{itemize} \item The bound state of the D1-D5 system on the $T^4/\mathbb{Z}_2$ orbifold, which is given by \begin{eqnarray} Z^{5d}_{S^N T^4/\mathbb{Z}_2 } &=& \prod_{r=0}^{N-1} \prod_{\begin{smallmatrix}k\in \mathbb{Z}+r/N,l\in \mathbb{Z},\\ j\in \mathbb{Z}\\ k>0,l\ge 0\end{smallmatrix}}(1-e^{2\pi i(k\sigma+ l\rho+jv)})^{-\sum_s e^{-2\pi isl/N}c^{(r,s)}(4kl-j^2)}, \nonumber \\ && \qquad\qquad\quad N=2. \end{eqnarray} Here the coefficients $c^{(r, s)}$ are read out from the expansion of the functions given in (\ref{2tortwist}). \item The contribution of the Wilson lines on $T^4/\mathbb{Z}_2$, which is given by \begin{eqnarray} Z^{5d}_{{\rm Wilson}: T^4/\mathbb{Z}_2} =\prod_{l=1}^\infty \left[ (1- e^{2\pi i (2l -1) \rho + 2\pi i v })^2 (1- e^{2\pi i (2l -1) \rho - 2\pi i v })^2 (1- e^{2\pi i (2l -1) \rho })^{-4} \right].\nonumber \\ \end{eqnarray} \item The partition function corresponding to the centre of mass motion of the D1-D5 system in the transverse space, \begin{equation} Z^{5d}_{{\rm c.o.m}} = \prod_{l=1}^{\infty} \left[ (1-e^{2\pi i (Nl\rho+v)})^{-2}(1-e^{2\pi i (Nl\rho-v)})^{-2} (1-e^{2\pi i Nl\rho})^4 \right], \qquad N=2. \end{equation} \item The contribution of the zero modes, \begin{equation} Z^{5d}_{{\rm zero modes}} = - ( e^{\pi i v} - e^{-\pi i v} )^2. \end{equation} \item The shift in the electric charge, accounted for by the factor $e^{-2\pi i \rho}$.
\end{itemize} Combining all the contributions we obtain \begin{eqnarray}\nn && Z^{5d}_{T^5/\mathbb{Z}_N}= -e^{-2\pi i \rho} \prod_{r=0}^{N-1}\prod_{\begin{smallmatrix}k\in \mathbb{Z}+r/N,l\in \mathbb{Z},\\ j\in \mathbb{Z}\\ k>0,l\ge 0\end{smallmatrix}}(1-e^{2\pi i(k\sigma+ l\rho+jv)})^{-\sum_s e^{-2\pi isl/N}c^{(r,s)}(4kl-j^2)}\\ \nn && \times \prod_{l=1}^{\infty} \left[ (1-e^{2\pi i (Nl\rho+v)})^{-2} (1-e^{2\pi i (Nl\rho-v)})^{-2} (1-e^{2\pi i Nl\rho})^4 \right] (e^{\pi i v}-e^{-\pi i v})^{2} \times Z_{{\rm Wilson}: T^4/\mathbb{Z}_N}, \\ & & \qquad \qquad \qquad N=2 . \label{d5dtor} \end{eqnarray} The partition function of the BMPV black hole in the $T^5/\mathbb{Z}_3$ model is given by the same expression as in (\ref{d5dtor}), except that the coefficients $c^{(r, s)}$ must be read out from the functions given in (\ref{3tortwist}) and $N\rightarrow 3$. \subsection{Orbifolds of $K3\times S^1$} We now construct the hair modes in $5$ dimensions for $K3\times S^1/g'$, where the quotient is by $g'$ associated with any conjugacy class of the Mathieu group $M_{23}$. The analysis proceeds identically to that done in \cite{Jatkar:2009yd}, the only difference being that the unit of momentum on $S^1$ is $N$. Here we briefly state the contributions. \begin{itemize} \item The contribution of the $4$ real left moving gravitino deformations of the BMPV black hole\footnote{ The bosonic deformations were shown to be singular at the horizon in \cite{Jatkar:2009yd}. }: \begin{eqnarray} Z_{{\rm hair}: \; g'}^{5d; \; f} = \prod_{l = 1}^\infty ( 1- e^{2\pi il N \rho})^4. \end{eqnarray} \item The contribution of the $8$ real gravitino zero modes, among the $12$ modes due to broken supersymmetries, which carry angular momentum $J=\pm \frac{1}{2}$: \begin{eqnarray} Z_{{\rm hair}:\; g' }^{5d; \; {\rm zero\;modes} } = ( e^{\pi iv} - e^{- \pi i v} )^4. \end{eqnarray} \end{itemize} Combining these contributions we obtain \begin{eqnarray} \label{5dhair} Z_{\rm{hair}: \; g'}^{5d} = ( e^{\pi iv} - e^{- \pi i v} )^4 \prod_{l = 1}^\infty ( 1- e^{2\pi il N \rho})^4. \end{eqnarray} The partition function for the horizon states is given by \begin{eqnarray}\label{5dhor} Z_{{\rm hor}: \; g'}^{5d} = \frac{Z^{5d}_{g'}}{ Z_{\rm{hair}: \; g'}^{5d} } . \end{eqnarray} Now comparing with the horizon states of the $4d$ dyons from (\ref{horparti2}), and using (\ref{d5d}) and (\ref{5dhair}) in (\ref{5dhor}), we can easily conclude \begin{equation} Z_{{\rm hor}: \; g'}^{5d} = Z_{{\rm hor}: \; g'}^{4d}. \end{equation} \subsection{Toroidal models} For the toroidal models the contributions of the hair are as follows. \begin{itemize} \item The contribution of the $4$ left moving gravitino modes, which results in \begin{eqnarray} Z_{{\rm hair}: \; T^5/\mathbb{Z}_{N}}^{5d; \; f} = \prod_{l = 1}^\infty ( 1- e^{2\pi il N \rho})^4 , \qquad\qquad N =2, 3. \end{eqnarray} \item The contribution of the zero modes. As we discussed earlier, supersymmetry in these models is tied to the radius of $S^1$. We propose that, due to this, out of the $8$ gravitino zero modes arising from broken supersymmetries which carry angular momentum $J=\pm \frac{1}{2}$, the wave functions of $4$ of them either become singular at the horizon or are not localized outside the horizon. These $4$ modes should not be counted as hair modes. Therefore the contribution of the zero modes in these models is given by \begin{eqnarray} Z_{{\rm hair}: \; T^5/\mathbb{Z}_{N}}^{5d; \; {\rm zero\; modes} } =- ( e^{\pi i v} - e^{- \pi i v } )^2.
\end{eqnarray} Consistency checks of this proposal will be performed in section \ref{signind}. \end{itemize} Combining these contributions we obtain \begin{eqnarray}\label{5dhairtor} Z_{\rm{hair}: \; T^5/\mathbb{Z}_N}^{5d} = -( e^{\pi iv} - e^{- \pi i v} )^2 \prod_{l = 1}^\infty ( 1- e^{2\pi il N \rho})^4. \end{eqnarray} The horizon partition function from the $5d$ perspective is given by \begin{equation}\label{5dhortor} Z_{\rm{hor}: T^5/\mathbb{Z}_N}^{5d} = \frac{ Z^{5d}_{T^5/\mathbb{Z}_N}} {Z_{\rm{hair}: \; T^5/\mathbb{Z}_N}^{5d} }. \end{equation} Comparing with the $4d$ horizon partition function given in (\ref{4dhortor}), and using (\ref{d5dtor}) and (\ref{5dhairtor}) in (\ref{5dhortor}), we see that \begin{equation} Z_{\rm{hor}: T^5/\mathbb{Z}_N}^{5d} = Z_{\rm{hor}: T^6/\mathbb{Z}_N}^{4d} . \end{equation} \section{The sign of the index for horizon states} \label{signind} In this section we address the main goal of the paper: we observe that the index of the horizon states of single centred dyons is always positive. \subsection{Canonical example: $K3\times T^2$} For the un-orbifolded model recall that the hair in $4d$ is given by \be\label{hair2b} Z_{ {\rm hair } : 1A}^{4d}=\prod_{l =1}^{\infty} \frac{1}{(1-e^{2\pi i l\rho})^{20}}. \ee The partition function of the horizon states is obtained by \begin{eqnarray} \label{horstate} Z_{{\rm hor} :1A} &=& \frac{1}{\Phi_{10}( \rho, \sigma, v) Z_{ {\rm hair } : 1A}^{4d}} = \frac{ \prod_{l=1}^\infty ( 1- e^{2\pi i l \rho} )^{20} }{ \Phi_{10} ( \rho, \sigma, v) }. \end{eqnarray} It was observed in \cite{Sen:2010mz} that the index $-B_6$, i.e. the Fourier coefficients of $1/\Phi_{10}$ extracted using the contour in (\ref{contour}) subject to the kinematic restrictions \begin{eqnarray} \label{keres} Q\cdot P \geq 0, \quad Q\cdot P \leq Q^2, \quad Q\cdot P \leq P^2, \quad Q^2,\; P^2,\; ( Q^2 P^2 - (Q\cdot P)^2 ) >0, \end{eqnarray} is positive. The contour together with the above kinematic constraints ensures that the index counts single centred dyons. Furthermore, \cite{Bringmann:2012zr} proved that the index of all single centred dyons with $P^2 = 2, 4$ is positive. These works assumed that there existed a frame in which the fermionic zero modes associated with broken supersymmetries were the only hair. We have seen that in the type IIB frame the hair degrees of freedom are given by (\ref{hair2b}). Naively it seems from the expression for the horizon states in (\ref{horstate}) that negative terms are introduced by the factor in the numerator, and that the positivity observed in \cite{Sen:2010mz} and \cite{Bringmann:2012zr} might be violated once the hair in the type IIB frame is factored out. However we will show, by adapting the proof of \cite{Bringmann:2012zr}, that single centred dyons with $P^2 =2$ do have a positive index. For other values of the charges we evaluate the index numerically; our results are presented in table \ref{k3}. We observe that for single centred dyons the index is indeed positive.
\begin{table}[H] \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|c|c|} \hline & & & & & \\ $(Q^2,\;P^2)\;\;\;$ \textbackslash $Q\cdot P$ & 0 & 1 & 2 & 3 &4\\ & & & & & \\ \hline & & & & & \\ (2, 2) & 28944 & 13863 & 1608 & 327 & 0 \\ (2, 4) & 761312 & 406296 & 72424 & 6936 & $-648$\\ (2, 6) & 12324920 & 6995541 & 1423152 & 96619 & $-13680$\\ (2, 8) & 148800072 & 88006584 & 19366320 & 1152216 & $-164244$\\ (4, 2) & 272832 & 154236 & 28944 & 1836 & $-648$\\ (4, 4) & 12980224 & 8595680 & 2665376 & 406296 & 25760 \\ (4, 6) & 333276712 & 235492308 & 85781820 & 16141380 & 1423152 \\ (6, 6) & 6227822652 & 4771720755 & 2158667028 & 572268361 & 85781820 \\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \caption{Index of horizon states for $K3\times T^2$; note that the negative entries correspond to charges with zero or negative values of $Q^2P^2 -(Q\cdot P)^2$.}\label{k3} \renewcommand{\arraystretch}{0.5} \end{table} \subsubsection*{Proof of positivity at $P^2=2$} We can perform a Fourier expansion of $\frac{1}{\Phi_{10}(\tau,\sigma,z)}$\footnote{We use the variable $\tau$ instead of $\rho$ and $z$ in place of $v$ to keep consistency with previous work \cite{Chattopadhyaya:2018xvg}.} in terms of Jacobi forms: \begin{equation}\label{fjdecomp} \frac{1}{\Phi_{10} ( q, p, y ) } = \sum_{m = -1}^\infty \psi_m (\tau, z ) p^m, \qquad q = e^{2\pi i \tau}, p = e^{2 \pi i \sigma}, y = e^{2\pi i z} . \end{equation} $\psi_m(\tau, z) \eta^{24}(\tau) $ is a weak Jacobi form of weight 2 and index $m$. In \cite{Dabholkar:2012nd} it was shown that $\psi_m(\tau, z) $ admits the following decomposition \begin{equation} \psi_m(\tau, z) = \psi_m^{\rm P} (\tau, z) + \psi_m^{\rm F} (\tau, z), \end{equation} where $\psi_m^{\rm F} (\tau, z)$ has no poles in $z$. The polar part is given by an Appell-Lerch sum: \begin{eqnarray} \psi_m^{\rm P} ( \tau, z) = \frac{p_{24}(m+1)}{\eta^{24}(\tau) } {\cal A}_{2, m }(\tau, z) , \\ \nonumber {\cal A}_{2, m }(\tau, z) = \sum_{s\in \mathbb{Z}} \frac{q^{ms^2 +s} y^{2ms +1}}{ (1 - q^s y )^2} . \end{eqnarray} At $P^2=2$ we have $m=1$ and we can write \be \psi_1^{\rm F} (\tau, z)=-\frac{3}{\Delta}(E_4 B(\tau,z)+216{\cal H}(\tau,z)). \ee We need to show that $\psi_1^h=-\frac{3}{q\prod(1-q^n)^4}(E_4 B(\tau,z)+216{\cal H}(\tau,z))$ has the positivity property. Here ${\cal H}$ is the simplest mock Jacobi form, defined in terms of the Hurwitz-Kronecker class numbers: \begin{eqnarray} {\cal H}(\tau, z) = \sum_{n =0}^\infty \sum_{l\in\mathbb{Z}} H( 4n - l^2)\, q^n y^l . \end{eqnarray} The coefficients $H(n)$ are defined by \begin{eqnarray} H( n) &=& 0 \qquad \hbox{for} \; n <0, \\ \sum_{ n\in\mathbb{Z} } H(n) q^n &=& -\frac{1}{12} +\frac{1}{3} q^3 + \frac{1}{2} q^4 + q^7 + q^8 + q^{11} + \cdots,\\ {\cal H}(\tau,z) &=& \theta_3(2\tau, 2z)h_0(\tau)+\theta_2(2\tau, 2z)h_1(\tau). \end{eqnarray} We can write the weak Jacobi form $B(\tau,z)$ given in (\ref{ab}) as: \be B(\tau,z)=\frac{\theta_1^2(\tau,z)}{\eta^6}=\frac{1}{\eta^6}(\theta_2(2\tau)\theta_3(2\tau,2z)-\theta_3(2\tau)\theta_2(2\tau,2z)), \ee where $\theta_2(\tau,z)=\sum_{n\in\mathbb{Z}}q^{\frac{(n+1/2)^2}{2}}y^{n+1/2}$, $\theta_3(\tau,z)=\sum_{n\in\mathbb{Z}}q^{n^2/2}y^n$ and $y=e^{2\pi i z}$. So we see that even and odd powers of $y$ are separated in $\psi_1^F$ by the two theta functions.
With this we can write $\psi_1^F$ and $\psi_1^h$ as follows: \begin{eqnarray}\nn \psi_1^F=\frac{3}{\Delta}\left(\theta_2(2\tau, 2z) \left(\frac{\theta_3(2\tau)}{\eta^6}E_4-216 h_1(\tau)\right)-\theta_3(2\tau, 2z)\left(\frac{\theta_2(2\tau)}{\eta^6}E_4+216 h_0(\tau)\right)\right)\\ \nn \psi_1^h=\frac{3}{\Delta_4}\left(\theta_2(2\tau, 2z) \left(\frac{\theta_3(2\tau)}{\eta^6}E_4-216 h_1(\tau)\right)-\theta_3(2\tau, 2z)\left(\frac{\theta_2(2\tau)}{\eta^6}E_4+216 h_0(\tau)\right)\right),\\ \label{psi1h} \end{eqnarray} where $\Delta_4=q\prod_{n=1}^{\infty}(1-q^n)^4$. We know the following results: \begin{enumerate} \item The Fourier coefficients in $h_0(\tau)$ and $h_1(\tau)$ are positive, except for that of $q^0$ in $h_0(\tau)$ \cite{Bringmann:2012zr}. \item All Fourier coefficients in the $q$ expansion of $\frac{\theta_2(2\tau)}{\eta^6} $ or $\frac{\theta_3(2\tau)}{\eta^6} $ are positive. \item $E_4=1+240\sum_{n=1}^{\infty}\sigma_3(n) q^n$, where $\sigma_3(n)=\sum_{d|n} d^3$. \end{enumerate} Consider the expression $\left(\frac{\theta_2(2\tau)}{\eta^6}E_4+216 h_0(\tau)\right)$: the only negative Fourier coefficient appears at $q^0$. We can prove the following lemma: \begin{lemma} For a function $f(q)=-1+\sum_{n=1}^{\infty} a(n)q^n$ having all positive $a(n)$, the function $\frac{f(q)}{\prod_{n=1}^{\infty}(1-q^n)^k}$ has positive coefficients as long as $a(1)>k$ and $a(n+1)>k $ for all $n\in\mathbb{N}$. \end{lemma} \begin{proof} We prove this for $\frac{1}{(1-q)^k}$; the rest can be proved similarly by using $q\rightarrow q^r$ and taking $f_{r+1}(q)=\frac{f_r(q)}{(1-q^{r+1})^k}$. For $f_2$ the coefficient of $q^1$ is $a(1)-k>0$, and the coefficient of $q^N$ for $N>1$ is given by \[ -\binom{N+k-1}{N}+ \binom{N+k-2}{N-1}a(1)+ \binom{N+k-3}{N-2}a(2)+\cdots > k.\] \end{proof} We can write \be \frac{1}{16}\left(\frac{\theta_2(2\tau)}{\eta^6}E_4+216 h_0(\tau)\right)=-1+\sum_{n=1}^{\infty}a(n)q^n. \ee Here $a(1)>15\sigma_3(1)>4$. Hence the removal of the hair degrees of freedom ensures positivity of $-B_6$ for the sector $Q\cdot P={\rm even}$ when $Q^2\ge 0$. In the series associated with $\theta_2(2\tau,2z)$ in equation (\ref{psi1h}), the Fourier coefficient of $q^{n-1/4}$ is bounded from below by \[10\sigma_3(n)-9H(4n-1).\] Its positivity is ensured starting from $n=2$ using the following bounds: \begin{enumerate} \item $\sigma_3(n) \ge n^3$, \item $H(n)<n$ \cite{Bringmann:2012zr}. \end{enumerate} For $n=1$ the positivity still holds, as $H(3)=1/3$. So the complete $q$ series expansion of $\left(\frac{\theta_3(2\tau)}{\eta^6}E_4-216 h_1(\tau)\right)$ contains no negative Fourier coefficient. This can also be seen from the Fourier expansion \be \left(\frac{\theta_3(2\tau)}{\eta^6}E_4-216 h_1(\tau)\right)=q^{-1/4}(1+176q+\cdots). \ee This ensures the positivity of $-B_6$ for $Q\cdot P={\rm odd}$, and hence the positivity of $\psi_1^h$ expected for $P^2=2$. \subsection{Orbifolds of $K3\times T^2$} For the $2A$ orbifold we extract the index of single centred dyons by using the contour in (\ref{contour}) together with the following kinematic constraints on the charges \cite{Sen:2010mz}: \begin{eqnarray}\label{tor2reg} Q^2>0, \; P^2>0, \; Q\cdot P\ge 0, \; P^2Q^2-(Q\cdot P)^2>0, \\ \nn 2Q^2\ge Q\cdot P, \; P^2 \ge Q\cdot P, \; P^2+2Q^2\ge 3 Q\cdot P. \end{eqnarray} The index of the horizon states for the $2A$ orbifold is given in table \ref{chl2}.
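For the numerical scans reported below it is convenient to have such kinematic domains in executable form. The following snippet is a minimal illustration of this bookkeeping (the function name and the interface, a triple $(Q^2, P^2, Q\cdot P)$, are our own conventions, not part of the derivation); it simply encodes the inequalities of (\ref{tor2reg}).
\begin{verbatim}
# Hypothetical helper encoding the single-centred domain (tor2reg) for
# the 2A orbifold; charges are passed as the triple (Q2, P2, QP).
def single_centred_2A(Q2, P2, QP):
    return (Q2 > 0 and P2 > 0 and QP >= 0
            and P2 * Q2 - QP ** 2 > 0
            and 2 * Q2 >= QP
            and P2 >= QP
            and P2 + 2 * Q2 >= 3 * QP)

# e.g. the entry (Q^2, P^2) = (1, 2), Q.P = 0 of the 2A table lies
# inside the single-centred domain
assert single_centred_2A(1, 2, 0)
\end{verbatim}
The analogous helpers for the other orbifolds are obtained by replacing the list of inequalities with the corresponding domain.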
\begin{table}[H] \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|c|c|} \hline & & & & & \\ $(Q^2,\;P^2)\;\;\;$ \textbackslash $Q\cdot P$ & 0 & 1 & 2 & 3 & 4\\ & & & & & \\ \hline & & & & & \\ (1, 2) & 580 & 176 & $-2$ & 0 & 0\\ (1, 4) & 5504 & 1856 & 32 & 0 & 0\\ (1, 6) & 41476 & 16200 & 996 & 52 & 0\\ (1, 10) &1293256 & 589200 & 63556 & 2752 &$-104$\\ (2, 2) & 1312 & 576 & 48 & 0 & 0 \\ (2, 4) & 16896 & 8640 & 1280 & 64 & 0 \\ (3, 2) & 9708 & 4696 & 580 & 52 & 0\\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \caption{Index of horizon states for the $2A$ orbifold of $K3$.}\label{chl2} \renewcommand{\arraystretch}{0.5} \end{table} The kinematic constraints on the charges for the $3A$ orbifold, such that the dyons are single centred, are given by \begin{eqnarray} Q^2>0, \; P^2>0, \; Q\cdot P\ge 0, \; P^2Q^2-(Q\cdot P)^2>0, \; 3Q^2\ge Q\cdot P, \; P^2 \ge Q\cdot P,\\ \nn 2P^2+3Q^2\ge 5 Q\cdot P,\; P^2+6Q^2\ge 5 Q\cdot P, \; 2P^2+6Q^2\ge 7 Q\cdot P. \end{eqnarray} The index for the horizon states is then obtained using the contour (\ref{contour}) and is listed in table \ref{chl3}. \begin{table}[H] \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|c|c|} \hline & & & & & \\ $(Q^2,\;P^2)\;\;\;$ \textbackslash $Q\cdot P$ & 0 & 1 & 2 & 3 & 4\\ & & & & & \\ \hline & & & & & \\ (2/3, 2) & 216 & 27 & 0 & 0 & 0 \\ (2/3, 4) & 1548 & 342 & 0 & 0 & 0\\ (2/3, 6) & 8532 & 2430 & 54 & 0 & 0\\ (4/3, 2) & 540 & 216 & 0 & 0 & 0\\ (4/3, 4) & 5820 & 2698 & 136 & 0 & 0\\ (2, 2) & 1728 & 621 & 54 & 0 & 0\\ (2, 6) & 204264 & 117837 & 23400 & 765 & 0\\ (2, 8) & 1440288 & 896670 & 216540 & 13932 & $54$\\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \caption{Index of horizon states for the $3A$ orbifold of $K3$.}\label{chl3} \renewcommand{\arraystretch}{0.5} \end{table} For an orbifold of order $N>3$ there is an infinite set of constraints on the charges to ensure that the index corresponds to single centred dyons \cite{Sen:2010mz}. However we see that as long as the norms of the electric and magnetic charges are positive and $Q\cdot P\ge 0$ together with $Q^2P^2-(Q\cdot P)^2>0$, the index $-B_6$ remains positive for the orbifolds of $K3$ (see the tables \ref{table4b}-\ref{table23a}). These orbifolds may be geometric, like those of CHL, or even non-geometric, with $g'\in [M_{23}]$.
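As an aside, the lemma used in the proof of positivity at $P^2=2$ above can also be checked numerically. The short script below is a sanity check, not a substitute for the proof: the truncation order $M$ and the coefficients $a(n)=5$ are illustrative choices satisfying the hypothesis $a(n)>k$, with $k=4$ as is relevant for $\Delta_4$.
\begin{verbatim}
# Numerical check of the lemma: for f(q) = -1 + sum_{n>=1} a(n) q^n with
# a(n) > k, the series f(q)/prod_{n>=1}(1-q^n)^k should have positive
# coefficients from q^1 onwards. Illustrative values: k = 4, a(n) = 5.
from math import comb

M = 60  # truncation order of the q-expansion

def mul(a, b):
    """Multiply two q-series given as coefficient lists, truncated at q^M."""
    c = [0] * M
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and i + j < M:
                c[i + j] += ai * bj
    return c

def inv_euler(k):
    """Coefficients of 1/prod_{n>=1}(1-q^n)^k, truncated at q^M."""
    series = [1] + [0] * (M - 1)
    for n in range(1, M):
        # 1/(1-q^n)^k = sum_m binom(m+k-1, m) q^{n m}
        factor = [0] * M
        for m in range(M // n + 1):
            if n * m < M:
                factor[n * m] = comb(m + k - 1, m)
        series = mul(series, factor)
    return series

k = 4
f = [-1] + [5] * (M - 1)        # a(n) = 5 > k = 4 for all n >= 1
coeffs = mul(f, inv_euler(k))
assert all(c > 0 for c in coeffs[1:]), "lemma check failed"
print("coefficients of q^1..q^5:", coeffs[1:6])
\end{verbatim}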
\begin{table}[H] \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|c|c|} \hline & & & & & \\ $(Q^2,\;P^2)\;\;\;$ \textbackslash $Q\cdot P$ & 0 & 1 & 2 & 3 & 4\\ & & & & & \\ \hline & & & & & \\ (1/2, 2) & 64 & 8 & 0 & 0 & 0\\ (1/2, 4) & 288 & 80 & 0 & 0 & 0\\ (1/2, 6) & 1088 & 464 & 24 & 0 & 0\\ (1, 2) & 96 & 48 & 0 & 0 & 0\\ (1, 4) & 464 & 480 & 16 & 0& 0\\ (3/2, 4) & 640 & 1680 & 160 & 0 & 0 \\ (3/2, 6) & 3958 & 11448 & 2026 & 38 & 0\\ (3/2, 22) & 232188670 & 421276388 & 228036842 & 43979890 & 2695862\\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \caption{Index of horizon states for the $4B$ orbifold of $K3$} \label{table4b} \renewcommand{\arraystretch}{0.5} \end{table} \begin{table}[H] \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|c|c|} \hline & & & & & \\ $(Q^2,\;P^2)\;\;\;$ \textbackslash $Q\cdot P$ & 0 & 1 & 2 & 3 & 4\\ & & & & & \\ \hline & & & & & \\ (2/5, 2) & 44 & 1 & 0 & 0 & 0 \\ (2/5, 4) & 220 & 20 & 0 &0 & 0 \\ (2/5, 6) & 880 & 125 & 0 & 0 & 0 \\ (4/5, 2) & 88 & 16 & 0 & 0 & 0 \\ (4/5, 4) & 560 & 160 & 0 & 0 & 0 \\ (6/5, 6) & 8360 & 3755 & 310 & 0 & 0\\ (6/5, 8) & 37394 & 18720 & 2202 & 16 & 0\\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \caption{Index of horizon states for the $5A$ orbifold of $K3$} \label{table5a} \renewcommand{\arraystretch}{0.5} \end{table} \begin{table}[H] \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|c|c|} \hline & & & & & \\ $(Q^2,\;P^2)\;\;\;$ \textbackslash $Q\cdot P$ & 0 & 1 & 2 & 3 & 4\\ & & & & & \\ \hline & & & & & \\ (1/3, 2) & 24 & 1 & 0 & 0 & 0\\ (1/3, 4) & 92 & 12 & 0 & 0 & 0\\ (1/3, 6) & 318 & 49 & 0 & 0 & 0\\ (2/3, 2) & 44 & 10 & 0 & 0 & 0\\ (2/3, 4) & 236 & 68 & 0 & 0 & 0\\ (1, 4) & 564 & 216 & 8 & 0 & 0 \\ (1, 6) & 2702 & 1201 & 100 & 0 & 0\\ (1/3, 34) & 15836220 & 6614053 & 409414 & 1789 & $-14$\\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \caption{Index of horizon states for the $6A$ orbifold of $K3$}\label{table6a} \renewcommand{\arraystretch}{0.5} \end{table} \begin{table}[H] \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|c|c|} \hline & & & & & \\ $(Q^2,\;P^2)\;\;\;$ \textbackslash $Q\cdot P$ & 0 & 1 & 2 & 3 & 4\\ & & & & & \\ \hline & & & & & \\ (2/7, 2) & 18 & 0 & 0 & 0 & 0\\ (2/7, 4) & 72 & 3 & 0 & 0 & 0\\ (2/7, 6) & 240 & 18 & 0 & 0 & 0\\ (4/7, 2) & 30 & 3 & 0 & 0 & 0 \\ (4/7, 4) & 150 & 31 & 0 & 0 & 0\\ (6/7, 8) & 5580 & 2304 & 0 & 0 & 0\\ (2/7, 40) & 46940778 & 18696804 & 1139238 & 4689 & $-18$\\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \caption{Index of horizon states for the $7A$ orbifold of $K3$} \label{7a} \renewcommand{\arraystretch}{0.5} \end{table} \begin{table}[H] \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|c|c|} \hline & & & & & \\ $(Q^2,\;P^2)\;\;\;$ \textbackslash $Q\cdot P$ & 0 & 1 & 2 & 3 & 4\\ & & & & & \\ \hline & & & & & \\ (1/4, 2) & 12 & 0 & 0 & 0 & 0\\ (1/4, 4) & 40 & 2 & 0 & 0 & 0\\ (1/4, 6) & 124 & 10 & 0 & 0 & 0\\ (1/2, 2) & 20 & 2 & 0 & 0 & 0\\ (1/2, 4) & 88 & 16 & 0 & 0 & 0\\ (3/4, 4) & 176 & 52 & 0 & 0 & 0\\ (3/4, 6) & 708 & 248 & 6 & 0 & 0 \\ (1/4, 46) & 37469836 & 15088039 & 845410 & 2491 & $-10$\\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \caption{Index of horizon states for the $8A$ orbifold of $K3$} \label{table8a} \renewcommand{\arraystretch}{0.5} \end{table} It is interesting to see that the index of horizon states, even for non-geometric orbifolds of $K3$,
retains its positivity in the domain $NQ^2\ge Q\cdot P$, $P^2 \ge Q\cdot P$, $Q^2 P^2-(Q\cdot P)^2>0$. \begin{table}[H] \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|c|c|} \hline & & & & & \\ $(Q^2,\;P^2)\;\;\;$ \textbackslash $Q\cdot P$ & 0 & 1 & 2 & 3 & 4\\ & & & & & \\ \hline & & & & & \\ (2/11, 2) & 6 & 0 & 0 & 0 & 0\\ (2/11, 4) & 18 & 0 & 0 & 0 & 0\\ (2/11, 6) & 50 & 1 & 0 & 0 & 0\\ (4/11, 2) & 8 & 0 & 0 & 0 & 0\\ (4/11, 4) & 32 & 4 & 0 & 0 & 0\\ (6/11, 8) & 592 & 172 & 2 & 0 & 0\\ (6/11, 10) & 1568 & 527 & 16 & 0 & 0\\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \caption{Index of horizon states for the 11A orbifold of $K3$}\label{table11a} \renewcommand{\arraystretch}{0.5} \end{table} \begin{table}[H] \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|c|c|} \hline & & & & & \\ $(Q^2,\;P^2)\;\;\;$ \textbackslash $Q\cdot P$ & 0 & 1 & 2 & 3 & 4\\ & & & & & \\ \hline & & & & & \\ (1/7, 2) & 3 & 0 & 0 & 0 & 0\\ (1/7, 4) & 7 & 0 & 0 & 0 & 0\\ (1/7, 6) & 18 & 0 & 0 & 0 & 0\\ (2/7, 2) & 4 & 0 & 0 & 0 & 0\\ (2/7, 4) & 14 & 1 & 0 & 0 & 0\\ (3/7, 8) & 163 & 45 & 0 & 0 & 0\\ (3/7, 10) & 390 & 116 & 2 & 0 & 0\\ (4/7, 10) & 774 & 329 & 14 & 0 & 0\\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \caption{Index of horizon states for the $14A$ orbifold of $K3$}\label{table14a} \renewcommand{\arraystretch}{0.5} \end{table} \begin{table}[H] \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|c|c|} \hline & & & & & \\ $(Q^2,\;P^2)\;\;\;$ \textbackslash $Q\cdot P$ & 0 & 1 & 2 & 3 & 4\\ & & & & & \\ \hline & & & & & \\ (2/15, 2) & 3 & 0 & 0 & 0 & 0\\ (2/15, 4) & 6 & 0 & 0 & 0 & 0\\ (2/15, 6) & 15 & 0 & 0 & 0 & 0\\ (4/15, 2) & 3 & 1 & 0 & 0 & 0\\ (4/15, 4) & 10 & 4 & 0 & 0 & 0\\ (2/5, 8) & 125 & 31 & 0 & 0 & 0\\ (2/5, 10) & 277 & 80 & 1 & 0 & 0\\ (8/15, 10) & 527 & 227 & 9 & 0 & 0\\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \caption{Index of horizon states for the $15A$ orbifold of $K3$}\label{table15a} \renewcommand{\arraystretch}{0.5} \end{table} \begin{table}[H] \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|c|c|} \hline & & & & & \\ $(Q^2,\;P^2)\;\;\;$ \textbackslash $Q\cdot P$ & 0 & 1 & 2 & 3 & 4\\ & & & & & \\ \hline & & & & & \\ (2/23, 2) & 1 & 0 & 0 & 0 & 0\\ (2/23, 4) & 2 & 0 & 0 & 0 & 0\\ (2/23, 6) & 5 & 0 & 0 & 0 & 0\\ (4/23, 2) & 14 & 2 & 0 & 0 & 0\\ (4/23, 4) & 28 & 4 & 0 & 0 & 0\\ (6/23, 8) & 87 & 36 & 4 & 0 & 0\\ (6/23, 10) & 144 & 57 & 6 & 0 & 0\\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \caption{Index of horizon states for the $23A$ orbifold of $K3$}\label{table23a} \renewcommand{\arraystretch}{0.5} \end{table} \subsection{Toroidal orbifolds} In \cite{Chattopadhyaya:2018xvg} we have seen that the positivity of the index for single centred dyons was violated for the toroidal models.
For completeness we have reproduced some of the indices evaluated in \cite{Chattopadhyaya:2018xvg} in tables \ref{qp0}, \ref{qp1} and \ref{qp2}. \begin{table}[H] \footnotesize{ \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|c|} \hline & & & & \\ $Q^2\;\;\;$ \textbackslash \textbackslash $P^2$ & 2 & 4 & 6 & 8 \\ & & & & \\ \hline & & & & \\ 1 & {\bf -224} & {\bf -1248} & 1728 & 95104 \\ 2 & 1152 & 18240 & 233984 & 2432544 \\ 3 & {\bf -3392} &{\bf -10320} & 542976 & 12103360 \\ 4 & -11520 & 200736 & 4575744 & 86712256 \\ 5 & {\bf -30336} & {\bf -55424} & 12914944 & 412163328 \\ 6 & 83968 & 1544832 & 61928448 & 2013023104 \\ 7 & {\bf -202560} & {\bf -179022} & 175358304 & 8292093664\\ 8 & 496512 & 9480000 & 638922240 & 32998944096 \\ 9 & {\bf -1118496} & {\bf -155232} & 1735394112 & 119618619520 \\ 10 & 2521600 & 49523328 & 5364983808 & 415768863360 \\ \hline \end{tabular} \end{center} \vspace{0.5cm} \caption{The index $d(Q, P) $ for the $\mathbb{Z}_2$ toroidal orbifold for some low lying values of $Q^2$, $P^2$ with $Q\cdot P=0$. }\label{qp0} \renewcommand{\arraystretch}{0.5} } \end{table} \begin{table}[H] \footnotesize{ \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|c|} \hline & & & & \\ $Q^2\;\;\;$ \textbackslash \textbackslash $P^2$ & 2 & 4 & 6 & 8 \\ & & & & \\ \hline & & & & \\ 1 & 96 & 1968 & 22528 & 190047 \\ 2 & {\bf -256} & 840 & 70912 & 1127672 \\ 3 & 1376 & 34656 & 728256 & 11046139 \\ 4 & {\bf -3840} & 16632 & 2497408 & 61486056 \\ 5 & 13152 & 343152 & 13144832 & 348876305 \\ 6 & {\bf -33536} & 171152 & 42058240 & 1603241304 \\ 7 & 92928 & 2476752 & 162898624 & 7016918625 \\ 8 & {\bf -220672} & 1265256 & 480911872 & 27503872048 \\ 9 & 540416 & 14545584 & 1556561664 & 102315259287 \\ 10 & {\bf -1204992} & 7558560 & 4271142656 & 354800345088 \\ \hline \end{tabular} \end{center} \vspace{0.5cm} \caption{ The index $d(Q, P) $ for the $\mathbb{Z}_2$ toroidal orbifold for some low lying values of $Q^2$, $P^2$ with $Q\cdot P=1$. }\label{qp1} \renewcommand{\arraystretch}{0.5} } \end{table} \begin{table}[H] \footnotesize{ \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|c|} \hline & & & & \\ $Q^2\;\;\;$ \textbackslash \textbackslash $P^2$ & 2 & 4 & 6 & 8 \\ & & & & \\ \hline & & & & \\ 1& 0 & $-12$ & $-224$ & $-1248$ \\ 2& 64 & 2592 & 43264 & 491904 \\ 3& {\bf- 224} & 2432 & 191168 & 3805600 \\ 4& 1152 & 43392 & 1440256 & 30853488 \\ 5& {\bf -3392} & 33720 & 5363680 & 171782688 \\ 6& 11520 & 414336 & 24533248 & 893029504 \\ 7& {\bf -30336} & 302400 & 80281536 & 3963098880 \\ 8& 83968 & 2926080 & 287831552 & 16432262672 \\ 9& {\bf -202560} & 2049968 & 851816352 & 62214237440 \\ 10& 496512 & 16919712 & 2627695616 & 222752294016 \\ \hline \end{tabular} \end{center} \vspace{0.5cm} \caption{The index $d(Q, P) $ for the $\mathbb{Z}_2$ toroidal orbifold for some low lying values of $Q^2$, $P^2$ with $Q\cdot P=2$. }\label{qp2} \renewcommand{\arraystretch}{0.5} } \end{table} \subsection*{Positivity of the horizon states for toroidal models} The indices in tables \ref{qp0}, \ref{qp1}, \ref{qp2} were obtained under the assumption that there exists a frame in which the fermionic zero modes associated with broken supersymmetries are the only hair. In (\ref{4dtorhair1}) and (\ref{4dtorhair2}) we have proposed the partition function for the hair degrees of freedom in the type IIB frame for the $\mathbb{Z}_2, \mathbb{Z}_3$ toroidal orbifolds respectively.
We evaluate the indices of horizon states in the following tables (\ref{tabletor1}-\ref{tabletor6}) and observe that they are all positive for single centered dyons. \begin{table}[H] \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|c|} \hline & & & & \\ $Q^2\;$ {\large{\textbackslash\textbackslash}} $P^2$ & 2 & 4 & 6 & 8\\ & & & & \\ \hline & & & & \\ 1 & 832 & 14816 & 158848 & 1283902 \\ 2 & 3840 & 101008 & 1425920 & 14471264 \\ 3 & 14624 & 556176 & 10273024 & 129971582 \\ 4 & 48128 & 2588336 & 62037760 & 971443680 \\ 5 & 143424 & 10594400 & 325402624 & 6254176746 \\ 6 & 394112 & 39145344 & 1521266688 & 35582718576 \\ 7 & 1016080 & 133122060 & 6465235840 & 182481593350 \\ 8 & 2480512 & 422430736 & 25355844096 & 856661245280 \\ 9 & 5786240 & 1264061344 & 92844570752 & 3726638152610 \\ 10 & 12968576 & 3595680768 & 320340466176 & 15170555788976\\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \caption{Index of horizon states for the $\mathbb{Z}_2$ orbifold of $T^6$ for $Q\cdot P=0$.} \label{tabletor1} \renewcommand{\arraystretch}{0.5} \end{table} \begin{table}[H] \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|c|} \hline & & & & \\ $Q^2\;$ {\large{\textbackslash\textbackslash}} $P^2$ & 2 & 4 & 6 & 8\\ & & & & \\ \hline & & & & \\ 1 & 480 & 9012 & 98784 & 811166 \\ 2 & 2496 & 69328 & 1001472 & 10329280 \\ 3 & 9888 & 403448 & 7664064 & 98689790 \\ 4 & 33664 & 1946480 & 48074496 & 766539920 \\ 5 & 102272 & 8155848 & 258619232 & 5063997322 \\ 6 & 286208 & 30667504 & 1231379200 & 29352001136 \\ 7 & 747456 & 105699406 & 5306269024 & 152656500694 \\ 8 & 1847040 & 339109664 & 21040306176 & 724593923536 \\ 9 & 4350816 & 1024054008 & 77737446688 & 3180401982114 \\ 10 & 9841408 &2935991504& 270248202752& 13043376086768\\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \caption{Index of horizon states for the $\mathbb{Z}_2$ orbifold of $T^6$ for $Q\cdot P=1$.} \label{tabletor2} \renewcommand{\arraystretch}{0.5} \end{table} \begin{table}[H] \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|c|} \hline & & & & \\ $Q^2\;$ {\large{\textbackslash\textbackslash}} $P^2$ & 2 & 4 & 6 & 8\\ & & & & \\ \hline & & & & \\ 1 & 96 & 1880 & 21056 & 178660 \\ 2 & 640 & 21312 & 329728 & 3577216 \\ 3 & 2992 & 151056 & 3115712 & 42306045 \\ 4 & 11008 & 813280 & 22062720 & 371908656 \\ 5 & 35840 & 3669600 & 128569280 & 2665839255 \\ 6 & 105472 & 14554120 & 647882496 & 16372365048 \\ 7 & 288192 & 52296704 & 2913889600 & 88924896642 \\ 8 & 738560 & 173535528 & 11950263808 & 436628175032 \\ 9 & 1798688 & 539123792 & 45385181120 & 1969579830259 \\ 10 & 4187008 & 1583791144 & 161466383616 & 8262793111120 \\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \caption{Index of horizon states for the $\mathbb{Z}_2$ orbifold of $T^6$ for $Q\cdot P=2$.} \label{tabletor3} \renewcommand{\arraystretch}{0.5} \end{table} \begin{table}[H] \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|c|} \hline & & & & \\ $Q^2\;$ {\large{\textbackslash \textbackslash}} $P^2$ & 2 & 4 & 6 & 8\\ & & & & \\ \hline & & & & \\ 1 & 0 & $-$12 & $-$224 & $-$1046 \\ 2 & 64 & 2480 & 40960 & 484752 \\ 3 & 320 & 26590 & 632544 & 9430780 \\ 4 & 1408 & 178096 & 5723136 & 106304080 \\ 5 & 5088 & 916872 & 38694432 & 887612004 \\ 6 & 16896 & 4001712 & 215960576 & 6052758272 \\ 7 & 50432 & 15481304 & 1047526432 & 35500683214 \\ 8 & 140352 & 54572672 & 4557481728 & 184959084864 \\ 9 & 365536 & 178371800 & 
18160058144 & 874917932484 \\ 10 & 905600 & 547471520 & 67260039168 & 3817189761008 \\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \caption{Index of horizon states for the $\mathbb{Z}_2$ orbifold of $T^6$ for $Q\cdot P=3$. Note that it is only when $Q^2P^2 - ( Q\cdot P)^2 <0$ that we observe a negative index.} \label{tabletor4} \renewcommand{\arraystretch}{0.5} \end{table} \begin{table}[H] \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|c|} \hline & & & & \\ $Q^2\;$ {\large{\textbackslash\textbackslash}} $P^2$ & 2 & 4 & 6 & 8\\ & & & & \\ \hline & & & & \\ 1 & 0 & 0 & 0 & 37 \\ 2 & 0 & $-8$ & $-256$ & 1232 \\ 3 & 16 & 1900 & 50880 & 868435 \\ 4 & 0 & 17928 & 757376 & 16261008 \\ 5& 96 & 114160 & 6613888 & 176919248 \\ 6 & 512 & 576016 & 43399680 & 1427632608 \\ 7 & 2416 & 2506512 & 236442496 & 9431113673 \\ 8 & 8320 & 9731384 & 1124958848 & 53751377384 \\ 9 & 26592 & 34532368 & 4818946176 & 272969682473 \\ 10 & 75904 & 113759408 & 18960610304 & 1262218427744 \\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \caption{Index of horizon states for the $\mathbb{Z}_2$ orbifold of $T^6$ when $Q\cdot P=4$. Note that it is only when $Q^2P^2 - ( Q\cdot P)^2 <0$ that we observe a negative index.} \label{tabletor5} \renewcommand{\arraystretch}{0.5} \end{table} We now enumerate the consistency checks we have done for the proposal for the hair modes in the $T^6/\mathbb{Z}_2$ toroidal model given in (\ref{4dtorhair1}). \begin{enumerate} \item If we do not include the zero modes $- e^{2\pi i v} ( 1- e^{2\pi i v} )^{-2}$ as part of the hair partition function in $T^6/\mathbb{Z}_2$, then we observe a violation of positivity of the index for $P^2=6,\; Q^2=1,\; Q\cdot P=2$ and $P^2=6,\; Q^2=2,\; Q\cdot P=3$. The indices for these dyonic charges are $-224$ and $-256$ respectively. These charges are within the kinematic domain defined by (\ref{tor2reg}). \item If we include the contribution of the Wilson lines given in (\ref{wilson1}) as part of the hair partition function and remove the contribution of the zero modes $- e^{2\pi i v} ( 1- e^{2\pi i v} )^{-2}$, we find violations of positivity of the index. This can be observed at $P^2=6,\; Q^2=1,\; Q\cdot P=2$; $P^2=6,\; Q^2=2,\; Q\cdot P=3$; and $P^2=4, \; Q^2=4,\; Q\cdot P=3$; the indices are given by $-64$, $-64$ and $-4$ respectively. \end{enumerate} These two observations show that we certainly need to include the contribution of the zero modes $- e^{2\pi i v} ( 1- e^{2\pi i v} )^{-2}$ as part of the hair partition function, which is consistent with our proposal. It would be interesting to prove this by studying the wave function of the gravitino zero modes in the toroidal models. A very similar analysis holds true for $T^6/\mathbb{Z}_3$. The index of horizon states obtained by considering the proposal given in (\ref{4dtorhair2}) for the hair partition function is positive, as shown in table \ref{tabletor6}. We have also repeated the consistency checks we mentioned earlier for the $\mathbb{Z}_2$ orbifold in this case, with the same conclusions.
\begin{table}[H] \renewcommand{\arraystretch}{0.5} \begin{center} \vspace{0.5cm} \begin{tabular}{|c|c|c|c|c|c|} \hline & & & & & \\ $(Q^2,\;P^2)\;\;\;$ \textbackslash $Q\cdot P$ & 0 & 1 & 2 & 3 & 4\\ & & & & & \\ \hline & & & & & \\ (2/3, 2) & 162 & 90 & 9 & 0 & 0 \\ (2/3, 4) & 1944 & 1134 & 162 & 0 & 0\\ (2/3, 6) & 14598 & 8748 & 1149 & 0 & 0\\ (4/3, 2) & 540 & 324 & 72 & 0 & 0\\ (4/3, 4) & 8856 & 5724 & 1458 & 54 & 0\\ (2, 2) & 1566 & 1008 & 243 & 18 & 0\\ (2, 4) & 34344 & 23652 & 7290 & 810 & 0\\ (2, 6) & 402972 & 286734 & 98613 & 13614 & 249 \\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \caption{Index of horizon states for the $T^6/\mathbb{Z}_3$ orbifold} \label{tabletor6} \renewcommand{\arraystretch}{0.5} \end{table} \section{Conclusions} \label{conclusions} We have constructed the horizon partition function of the $1/4$ BPS dyonic black hole in ${\cal N}=4$ theories obtained by compactifying type IIB on orbifolds of $K3\times T^2$. We then observed that the indices of the horizon states of single centred black holes are all positive. We adapted the proof of \cite{Bringmann:2012zr} and showed that the index of the horizon partition function of single centred dyons with $P^2=2$ remains positive. For the toroidal models we propose that the hair modes are given by (\ref{4dtorhair1}) and (\ref{4dtorhair2}). We showed that the index of horizon states with this proposal is positive and performed consistency checks. As mentioned earlier, it would be interesting to study the wave function of the zero modes of the gravitino in the toroidal models to check the proposal in (\ref{4dtorhair1}) and (\ref{4dtorhair2}). In \cite{Chattopadhyaya:2018xvg} it was noticed that the index of single centred dyons in these models was not positive when one assumed that the only hair modes are the fermionic zero modes associated with broken supersymmetry generators. Since hair modes are frame dependent, the observations in this paper indicate that there is possibly no duality frame for these models which contains only the fermionic zero modes as the hair. It will be interesting to verify this explicitly by a study similar to that done in \cite{Chowdhury:2014yca,Chowdhury:2015gbk} for the ${\cal N}=8$ theory. The observation that the index of horizon states in the canonical compactification on $K3\times T^2$ is positive is worth further study. It should be possible to extend the proof of \cite{Bringmann:2012zr} to higher values of $P^2$. \vspace{.5cm} {\bf Note Added:} As this work was nearing completion, we became aware of the work done in \cite{Chakrabarti:2020ugm}. The analysis of the hair modes done for the CHL orbifolds of $K3$ in sections \ref{sechorstate} and \ref{check} overlaps with parts of \cite{Chakrabarti:2020ugm}. \acknowledgments{We thank Ashoke Sen for very useful discussions at several instances over the course of this project, which helped us to understand issues related to the positivity of the index. We also thank Jan Manschot for helpful discussions. We thank Amitabh Virmani for discussions and for informing us of the conclusions of \cite{Chakrabarti:2020ugm}. The work of A.C. is funded by IRC Laureate Award 15175.}
\section{Introduction} Trajectory prediction has become a major topic of research in computer vision for autonomous driving~\cite{lee2017desire, srikanth2019infer, alahi2016social, marchetti2020memnet, rhinehart2019precog, tang2019multiple, sadeghian2019sophie, choi2019drogon}. The task is of utmost importance, since predicting the trajectories of other agents makes it possible to avoid danger and plan ego-motion safely. Unfortunately, the autonomous driving datasets required to train prediction models are extremely expensive to gather effectively. Costly data acquisition campaigns are required to obtain large scale vehicle trajectories with context, and several sensors are needed: cameras, stereo pairs, LiDARs, IMUs and GPS. Once the campaign is terminated, vehicle trajectories may be estimated by tracking detections and fusing LiDAR measurements \cite{Geiger2012CVPR}. Context is usually provided by remapping image semantic labels~\cite{chen2018deeplab} onto the ground plane, which also requires a LiDAR scan or a depth map. Finally, ego-motion estimation is required to register multiple map and trajectory acquisitions over time. Some datasets, such as KITTI~\cite{Geiger2012CVPR}, TrafficPredict \cite{ma2019trafficpredict} or Argoverse \cite{chang2019argoverse}, are acquired with instrumented cars using LiDAR and multiple cameras. Others are extracted using a multicamera setup, like NGSIM \cite{ngsim}, which has been collected at a US highway junction. Other than being costly, all these setups for data acquisition are extremely time consuming, requiring either waiting for data collection~\cite{ngsim} or driving a car in traffic for hours~\cite{Geiger2012CVPR, ma2019trafficpredict, chang2019argoverse}. This complexity has the effect of limiting the scale of the datasets. An alternative way to gather trajectory data is to rely on less expensive existing videos lacking sensor annotations and to estimate vehicle motion from them, for instance using SLAM~\cite{murORB2} or replacing sensor data with deep learning methods~\cite{becattini2019vehicle}. These methods however still require high quality videos captured from a moving vehicle. To overcome data acquisition limitations, the use of synthetic datasets has always attracted the interest of deep learning researchers. The potential of using simulated data lies in the ability to increase the training data at little or no cost, thus making learned models more capable and robust. For instance, GANs have been used to generate synthetic eye imagery to train gaze estimators~\cite{shrivastava2017learning}. Synthetic images have also been used to train detectors in an automotive scenario~\cite{huang2018auggan}. A different take on the problem is to generate completely synthetic data~\cite{saleh2018effective, richter2016playing, Dosovitskiy17} using advanced game engines. What makes this approach so compelling is control over the rendering pipeline, which allows pixel level annotations to be obtained automatically at no cost. Currently, there is no work addressing the training of trajectory predictors from synthetic data. Differently from images, trajectories are low dimensional and are in principle easier to generate. Nonetheless, generated trajectories must be framed into a context in order to exploit knowledge about the surrounding environment at inference time. Moreover, trajectory data must be coherent in terms of scale and object dynamics.
In this paper we propose a procedural strategy for generating realistic synthetic pairs of trajectories and semantically labeled top-view maps, relying on statistics of existing datasets. Computer graphics researchers have often sought procedural methods to generate data~\cite{smelik2014survey}, which do not require costly handcrafting of digital artifacts by visual artists. In the specific case of city map generation, recent methodologies combine terrain and water data to shape the city map~\cite{chen2008interactive, parish2001procedural}. While these methods enable realistic designs of cities, our goal is slightly different. First, we do not need a whole city to be generated at once, since our prediction model has access only to a limited surrounding area. This is in line with a feasible real-world system whose perception is limited by sensor range. It could be argued that the whole city could be generated and cached and then local snapshots could be retrieved. Nonetheless, our methodology allows us to create a wider range of possibilities at a faster rate. The efficiency of our model, coupled with its random nature, makes it suitable for a deep learning training loop. Indeed, we are able to provide newly generated examples at learning time, making the training set virtually infinite. The main idea of this work is that roads are born from agent paths. While modern roads are designed from the need to connect locations and to optimize commerce and transport in general, some believe that in certain cases roads originated from humans following trails drawn by animals~\cite{helbing2001self}, such as the Icknield Way~\cite{icknield}. Relying on this principle, it is easier to generate plausible trajectories and build maps around them, rather than generating a map or a city and fitting plausible motion onto it. In addition, data acquired from sensors might not have access to all the desired information, which could instead be obtained in a synthetic or simulated environment. An example of this is occlusion caused by other vehicles, which has been addressed using GANs to generate samples recovering the structure of the layout~\cite{berlincioni2019road, bescos2019empty}. In the case of trajectory data, what can be observed in the real world is only the path taken by a vehicle. Yet, when predicting its future location, multiple equally likely outcomes might be possible. This information is impossible to capture with sensors, while with synthetically generated data it is possible to offer a range of possibilities for a single observation. Overall, in this work we study the possibility of augmenting trajectory prediction datasets by generating synthetic data using a Markov Chain with parameters estimated from real data statistics. Our method consistently generates plausible trajectories paired with semantic context maps. Each sample is split into an observed past and a set of possible futures, i.e. the observed variable and the variables to be predicted. We show that our synthetic data can help in learning good features and that, combined with real data, it can yield state of the art results on trajectory prediction benchmarks. The main contributions of this paper are: \begin{itemize} \item We propose a method for estimating a Markov Chain describing vehicle dynamics from real data. This is then used to generate synthetic data to augment trajectory prediction datasets. \item Our generation pipeline allows us to create samples which explicitly address the multimodality of trajectory prediction, i.e.
samples with a single past trajectory and multiple future outcomes that cover different roads. \item We propose a prediction model equipped with a recurrent controller that performs an incremental attention over possible future locations. By combining real and synthetic data we demonstrate that our model is able to achieve state of the art results. \item We introduce the novel \textit{Multimodality Loss}, which, thanks to the generated multimodal samples, allows us to train the network with direct supervision on each possible future. \end{itemize} \begin{figure*}[t] \centering \includegraphics[width=0.15\textwidth]{img/icpr/synth_no_predictions/im_t176.png} \includegraphics[width=0.15\textwidth]{img/icpr/synth_no_predictions/im_t762.png} \includegraphics[width=0.15\textwidth]{img/icpr/synth_no_predictions/im_t370.png} \includegraphics[width=0.15\textwidth]{img/icpr/synth_no_predictions/im_t468.png} \includegraphics[width=0.15\textwidth]{img/icpr/synth_no_predictions/im_t521.png} \includegraphics[width=0.15\textwidth]{img/icpr/synth_no_predictions/im_t157.png} \\ \medskip \includegraphics[width=0.15\textwidth]{img/icpr/synth_no_predictions/im_t223.png} \includegraphics[width=0.15\textwidth]{img/icpr/synth_no_predictions/im_t277.png} \includegraphics[width=0.15\textwidth]{img/icpr/synth_no_predictions/im_t300.png} \includegraphics[width=0.15\textwidth]{img/icpr/synth_no_predictions/im_t371.png} \includegraphics[width=0.15\textwidth]{img/icpr/synth_no_predictions/im_t455.png} \includegraphics[width=0.15\textwidth]{img/icpr/synth_no_predictions/im_t582.png} \caption{Synthetic trajectories in context. Semantic maps are $360\times360$px where a pixel corresponds to 0.5 meters. Purple corresponds to road pixels, pink to sidewalk and black to background. Trajectories are divided into 2s past (red) and multiple 4s futures (green) and are sampled at 10Hz.} \label{img:synthetic_maps} \end{figure*} \section{Synthetic Trajectory Generation} We address the problem of vehicle trajectory prediction from a data-driven point of view. The focus of this work is to augment existing trajectory datasets with synthetic samples, which can then be used to train predictive models more effectively. The samples we want to generate are made of two main components: the actual trajectory followed by a vehicle and the context in which it is driving. We identify a trajectory as a sequence of coordinates $(x_i, y_i)$, divided into two subsets: past coordinates and future coordinates. Past coordinates represent the history that a predictive model can observe, i.e. all positions occupied by the vehicle up to a given time identifiable as the present; future coordinates instead represent where the vehicle will go in the near future. A context instead corresponds to a semantic map $m$, where each pixel is labeled with a category such as road or sidewalk. To generate synthetic samples we model roads as paths carved by agents. Based on this idea, we use a random trajectory generator to draw paths and then we create in-scale semantic maps of roads and sidewalks. Our goal is to have a fast method to generate a high variety of maps and trajectories on the fly, rather than obtaining a full map of an urban or suburban scenario. In the following we first outline our trajectory generation pipeline and then explain how to build complementary semantic maps.
\subsection{Trajectory Generation} \label{sec:trajgen} In order to generate synthetic trajectories we exploit a Markov Chain whose parameters are estimated from real data. The states of the chain correspond to vehicle position offsets from one timestep to the next. We represent offsets in a polar coordinate system with the y axis oriented in the same direction as the vehicle; therefore each state encodes the speed and curvature of the vehicle at a given instant. Given an initial random state, the Markov Chain allows us to generate a trajectory through subsequent random state transitions. By concatenating all generated offsets we are then able to sample a complete trajectory, making a transition at each timestep. For example, if trajectories are sampled at 10Hz, then after 10 state transitions of the Markov Chain a trajectory spanning one second will be generated. To identify the states of the Markov Chain and estimate the transition matrix, we rely on real trajectory data. Given a set of real trajectories in world coordinates $t_i = (x_0, y_0), (x_1, y_1), ..., (x_T, y_T)$ across $T+1$ timesteps, we first compute for each sample the $T$ intermediate offsets in polar coordinates. The radius $\rho_k$ is simply computed as the Euclidean distance between points $k+1$ and $k$, while the angle $\theta_k$ is equal to the change in orientation of the vehicle, i.e. the difference in degrees between the vehicle heading direction at times $k$ and $k+1$. This representation has the advantage of being rotation invariant, since the angles are computed relative to the forward direction. To obtain a finite and compact set of states, we apply K-means to cluster all offsets. The centroids of the discovered clusters represent an approximation of any possible state in which the vehicle can find itself, based on the training data. We use these centroids as nodes of the Markov Chain. To estimate transitions we find all pairs of subsequent states in the dataset. Theoretically, each state could transition to any other, yet some transitions are physically implausible, e.g. those implying sudden changes in speed or in steering angle. Each transition from a source state to a destination state is associated with a probability by counting its number of occurrences, normalized by the total number of transitions outgoing from the source node. Such states and transitions define the Markov Chain we use for sampling new trajectories. A synthetic trajectory is built as a sequence of offsets belonging to the clusters encountered while visiting the Markov Chain. At each node a sample from the corresponding cluster is drawn and used to generate the current trajectory offset. This process can be generalized to states that take into account multiple timesteps. In fact, by representing each state with a single cluster, each transition has a limited memory of the past evolution of the trajectory, which may result in erratic patterns. To increase the memory, we simply identify each node in the chain with a sequence of temporally adjacent offsets, each quantized as a cluster centroid. In this way, a transition can be defined as a mapping from a sequence of $N$ displacements occurring at timesteps $(-N+1, ..., -1, 0)$ to a sequence of displacements occurring at $(-N+2, ..., 0, 1)$, where timestep 0 corresponds to the present. Increasing $N$, though, will make the state space grow at a rate of $C^N$, where $C$ is the number of clusters, limiting at the same time the number of samples over which to estimate transition statistics.
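A minimal sketch of this estimation and sampling procedure is given below. The cluster count, the memory of a single timestep, angles in radians, and the use of cluster centroids in place of samples drawn from each cluster are simplifying assumptions of the illustration.
\begin{verbatim}
# Minimal sketch of the generator described above, with one timestep of
# memory for brevity. `offsets` is assumed to be an array of (rho, theta)
# pairs, in metres and radians, taken from real trajectories in temporal
# order.
import numpy as np
from sklearn.cluster import KMeans

def fit_markov_chain(offsets, n_clusters=64):
    kmeans = KMeans(n_clusters=n_clusters).fit(offsets)
    states = kmeans.predict(offsets)
    # count transitions between temporally adjacent states ...
    trans = np.zeros((n_clusters, n_clusters))
    for s, t in zip(states[:-1], states[1:]):
        trans[s, t] += 1
    # ... and normalize rows into probabilities (rows assumed non-empty)
    trans /= trans.sum(axis=1, keepdims=True)
    return kmeans, trans

def sample_trajectory(kmeans, trans, n_steps, rng=np.random.default_rng()):
    state = rng.integers(len(trans))
    xy, heading = np.zeros(2), 0.0
    points = [xy.copy()]
    for _ in range(n_steps):
        # the centroid stands in for a sample drawn from the cluster
        rho, theta = kmeans.cluster_centers_[state]
        heading += theta                  # theta is relative to heading
        xy = xy + rho * np.array([np.sin(heading), np.cos(heading)])
        points.append(xy.copy())
        state = rng.choice(len(trans), p=trans[state])  # random transition
    return np.stack(points)
\end{verbatim}
At 10Hz, \texttt{sample\_trajectory(kmeans, trans, 60)} would return a 6 second path; a longer memory is obtained by letting states be tuples of adjacent cluster indices.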
In our experiments we set $N=2$ unless stated otherwise. \subsection{Map Generation} A map $m$ is a tensor of size $H\times W \times C$ representing a top-view context labeled with semantic categories. $H$ and $W$ correspond to the spatial extent of the map and $C$ is the number of semantic classes used to label it. We use 3 classes, encoded as 1-hot vectors in each pixel of the map: road, sidewalk and background. We do not model other categories that can typically be found in urban scenes, such as buildings or vegetation, since they do not affect driving patterns. Each map has a granularity of $0.5$ meters per pixel, therefore a context covers an area of $H/2 \times W/2$ meters. Since we are interested in modeling urban scenarios and vehicles move exclusively on roads, context maps are created by generating a set of trajectories and drawing roads around them. We use our trajectory generation pipeline to sample a sufficiently long path and, by adding a thick stroke to it, we are able to define the pixels labeled as road. In the same way, we add sidewalks next to lanes. The width of the stroke defines the width of a lane and its sidewalk. In our experiments we generate maps with lanes approximately 6 meters wide and sidewalks up to 1.5 meters, similarly to regular roads in the real world. To obtain crossroads and forks we generate a new road starting from a random point along the previously generated one. We iterate this process a random number of times $b$, which we refer to as the \textit{branching factor}. A higher branching factor leads to more complex scenes, while a branching factor of 1 provides a simple road with no intersections. In our experiments we use a branching factor up to 5. To obtain richer scenarios, we randomly double the width of a whole generated path, indicating that the road has two lanes instead of one. Additional roads, either not connected to the main one or behind the vehicle, are added in the scene to include portions of the map that could potentially be taken by a vehicle, but that the vehicle we want to predict cannot reach. Although this might seem unnecessary, we show that it helps the learning process of a predictive model, as discussed in Section~\ref{sec:ablation}. Usually, maps similar to ours are obtained by combining LiDAR point clouds acquired by the vehicle and semantic segmentation algorithms~\cite{Geiger2012CVPR}. This procedure, though, leads to noisy maps in regions far away from the sensor, since the point cloud gets sparser as the distance increases. To mimic this, we randomly add noise on map borders by turning road and sidewalk pixels to background. Similarly, borders between categories tend to be noisy and irregular, therefore we randomly vary the width of sidewalks to simulate this effect. \subsection{Multimodal synthetic sample generation} To generate synthetic samples, comprising both a trajectory and its context, we select an $M$-point segment from one of the trajectories that generated the roads. The trajectory segment can then be split into two segments $p$ and $f$ of length $P$ and $F$ respectively, representing the past observation and the future trajectory. The context is created by cropping a map centered on the present point, i.e. the last point of the past. Throughout all experiments we set $P=20$ and $F=40$, which correspond to a 2 second past and a 4 second future with trajectories sampled at 10Hz, for a total of 6 seconds ($M=60$). The context is chosen to have an extent of $360\times360$px ($180\times180$m).
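The sketch below illustrates how a basic sample can be assembled from a generated path, including a set of alternative future endings whose construction is detailed next. Class values, road widths and the distance-based stroke are assumptions of this illustration; centering on the present point, lateral shifts and border noise are omitted.
\begin{verbatim}
# Illustrative assembly of one synthetic sample: rasterize the roads
# around a generated path and split the trajectory into past and future.
import numpy as np

BG, ROAD, SIDEWALK = 0, 1, 2
P, F = 20, 40                      # 2s past + 4s future at 10 Hz

def rasterize(points, H=360, W=360, m_per_px=0.5, lane_w=6.0, side_w=1.5):
    """Pixels within half the lane width of the path become road, a band
    beyond that becomes sidewalk. Distance to the densely sampled points
    approximates distance to the continuous path."""
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float32) * m_per_px
    pix = np.stack([xs, ys], axis=-1)
    dist = np.full((H, W), np.inf, dtype=np.float32)
    for p in points:
        dist = np.minimum(dist, np.linalg.norm(pix - p, axis=-1))
    grid = np.full((H, W), BG, dtype=np.uint8)
    grid[dist <= lane_w / 2 + side_w] = SIDEWALK
    grid[dist <= lane_w / 2] = ROAD    # road overwrites the sidewalk band
    return grid

def make_sample(traj, alt_futures):
    """traj: (P+F, 2) main path in metres; alt_futures: list of (F', 2)
    alternative endings branching off the future segment."""
    past, future = traj[:P], traj[P:]
    context = rasterize(np.concatenate([traj] + alt_futures, axis=0))
    return past, [future] + alt_futures, context
\end{verbatim}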
To increase variability we shift trajectories orthogonally to the road by a random offset, with a higher probability of keeping them close to the right side, as in an actual driving scenario assuming right-hand driving.

Generating synthetic samples has the immediate advantage of augmenting a trajectory prediction dataset. More importantly, simulated data can be generated to explicitly address the multimodality of the task. In fact, predicting the future position of a vehicle bears an intrinsic uncertainty, since multiple equally probable paths might be present, such as before intersections. Trajectory data collected from the real world cannot carry information about this multimodality, since a vehicle can only take a single direction out of the many possible ones. Looking at the problem from a machine learning point of view, we want to learn a function that maps an observation $x$ into one of $K$ multiple outcomes $\{y_i\}_{i=1,...,K}$. In a supervised learning framework, real world data is able to provide a single supervision signal out of $K$. To make matters worse, multiple examples might exist with similar observations $x$ and completely different outcomes $y$, which is detrimental to learning. In a simulated environment instead, we can overcome this limitation by imagining several possible outcomes and providing all of them as ground truth to the learning algorithm. To create multimodal trajectories, we simply select points in the future segment from which to initialize new trajectories and sample different transitions from the Markov Chain. By building roads around these trajectories, each encountered intersection will have an associated ground truth and each sample will have a set of possible outcomes. Summarizing, a sample is made of: a semantic map $m$ centered at the present position of the vehicle; the past trajectory of the vehicle $p$; a set of $N_{GT}$ possible futures $f_i$ with $i=1,...,N_{GT}$. Examples of synthetically generated maps with multiple futures are shown in Fig.~\ref{img:synthetic_maps}.

\begin{figure*}[htb] \centering \includegraphics[width=0.8\textwidth]{img/model_controller} \caption{Architecture overview. Past trajectory and context map are encoded separately and used as input and initial state of the controller. The controller loops $K$ times and at each iteration performs attention over the map encoding via dot product. The resulting vector is fed to the decoder, which emits a prediction. A diverse future is obtained at each iteration of the controller.} \label{img:model} \end{figure*}

\section{Prediction Model}
We developed a model specifically tailored to exploit synthetic samples with multimodal ground truth futures (Fig.~\ref{img:model}). The architecture is based on an encoder-decoder structure, which takes past trajectories as input and outputs multiple futures. Our model is equipped with a recurrent controller that at each step performs attention over the context map, guiding the predictions towards different outcomes. First, separate encoders learn latent representations for the past and the context. The trajectory encoder is a Gated Recurrent Unit (GRU) and the context encoder a Convolutional Neural Network (CNN). The two encodings are then fed to the controller, also implemented as a GRU. At each timestep, the same past encoding is fed as input, while the context encoding is used to initialize the hidden state.
The memory of the GRU stores knowledge about future paths that have already been explored and outputs an attention vector which weights the context embedding via dot product. The resulting vector is then fed to a final GRU that decodes it into a future prediction. This process is iterated $K$ times, where $K$ is the desired number of futures. The recurrent layers, employed as encoder-decoder and controller, work with sequences on two different abstraction levels. The encoder and the decoder model time, i.e. there is a correspondence between each update of the GRU and an actual timestep in the evolution of the vehicle dynamics. The controller, on the other hand, models the multimodality of possible futures, exploring the semantic map to find possible roads that the vehicle might travel. At the same time, the controller also models different ways of navigating the same road (e.g. accelerating/decelerating).

\subsection{Implementation details}
All trajectories fed to the model, both at training and at testing time, are rotated such that the direction of the vehicle in the present points upward. This is useful since it provides rotation invariance and simplifies the task. The trajectory encoder network is implemented as a Recurrent Neural Network using a GRU with two layers and a hidden state size of 256. The context encoder instead is a CNN composed of 4 blocks of convolutional layers with ELU non-linearities and a final fully connected layer, as shown in Fig.~\ref{img:contextenc}. The context encoder receives multiple crops from the original top-view map and processes each one of them individually. We pick 3 overlapping crops in front of the position of the vehicle at time $t_0$ (the present), which coarsely represent the three main performable maneuvers (turn left, go straight, turn right). The advantage of doing so is to process the context map at a higher resolution without altering the structure of the network. The encoding vectors of each crop are finally concatenated and blended with a final fully connected layer to form a 256-dimensional representation.

\begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{img/contextenc} \caption{Architecture of the context map encoder. Multiple crops of the input map are extracted, encoded independently and successively combined.} \label{img:contextenc} \end{figure}

The encodings for trajectory and context share the same dimension since we use them to initialize the controller, which is also implemented as a 256-dimensional GRU: the hidden state is initialized with the context and the trajectory is fed as input. Finally, the trajectory decoder is a GRU with 3 layers and a hidden state size of 256, followed by a fully connected layer that maps the output into the 2-dimensional offsets of the future predicted trajectory. The trajectory decoder is also trained with a dropout probability of 0.2.

\subsection{Training}
\label{sec:training}
The presence of the controller generating multiple futures allows us to take full advantage of the synthetic trajectories, which are paired with several ground truths. In fact, each ground truth can serve as supervision and each step of the GRU can be specifically optimized. Usually, to enforce multiple diverse predictions, a \textit{Variety Loss}~\cite{gupta2018social} is used during training. This loss minimizes the Mean Squared Error between the only available ground truth and the best prediction out of $K$ (this loss is sometimes referred to as \textit{best-of-K}).
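For concreteness, here is a minimal sketch of the \textit{Variety Loss} just recalled, together with the greedy matching performed by the \textit{Multimodality Loss} that we introduce in the following paragraphs. PyTorch is assumed; shapes, names and the averaging convention are our own illustrative choices.

\begin{verbatim}
import torch

def variety_loss(preds, gt):
    # preds: (K, F, 2) predicted futures; gt: (F, 2) single ground truth.
    # Only the best of the K predictions receives gradient (best-of-K).
    errors = ((preds - gt.unsqueeze(0)) ** 2).mean(dim=(1, 2))  # (K,)
    return errors.min()

def multimodality_loss(preds, gts):
    # preds: (K, F, 2); gts: (N_GT, F, 2) with N_GT <= K.
    # Greedily pair targets and predictions by increasing distance, so
    # that every ground truth is assigned at least one prediction.
    dist = ((preds.unsqueeze(1) - gts.unsqueeze(0)) ** 2).mean(dim=(2, 3))
    free_p = set(range(dist.shape[0]))
    free_g = set(range(dist.shape[1]))
    loss = 0.0
    while free_g:
        k, n = min(((k, n) for k in free_p for n in free_g),
                   key=lambda kn: dist[kn].item())
        loss = loss + dist[k, n]
        free_p.discard(k)
        free_g.discard(n)
    for k in free_p:  # leftover predictions go to their closest future
        loss = loss + dist[k].min()
    return loss / dist.shape[0]
\end{verbatim}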
The advantage of backpropagating only through the best prediction is to avoid a single averaged solution and to encourage the model to generate a set of diverse alternatives. This does not happen when optimizing the MSE of all generated futures with respect to a single ground truth. While this has often proven effective~\cite{gupta2018social, lee2017desire, srikanth2019infer, marchetti2020memnet}, it exploits only partial supervision; hence a large amount of the computation performed during training is wasted, never being used in backpropagation. To overcome this limitation and exploit multiple synthetic ground truths, we introduce a \textit{Multimodality Loss} which optimizes a prediction for each available ground truth. The loss computes pairwise distances between all targets and predictions. Then, it iteratively pairs the trajectories with the minimum distance in order to assign at least one prediction to each future. The first match is given by the lowest pairwise distance. The paired ground truth and prediction are then temporarily removed and the process is repeated for all remaining ground truths. If $K>N_{GT}$, i.e. if the number of estimates is higher than the number of ground truths, the remaining predictions are paired to the closest future. In our experiments we use $K=5$ and a variable number of ground truth futures from 1 to 5. The \textit{Multimodality Loss} allows us to backpropagate the error for each timestep of the controller, thus explicitly instructing the model about all possible future alternatives. We show in Section~\ref{sec:loss_ablation} that our loss provides benefits over existing losses such as the MSE and the \textit{Variety Loss}.

\begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{img/clusters} \caption{Trajectory offsets from the KITTI dataset in polar coordinates ($\rho, \theta$), clustered through K-means into 40 clusters. Different colors represent different clusters.} \label{img:clusters} \end{figure}

\section{Experiments}
\subsection{Datasets and Metrics}
In our experiments we use the KITTI dataset~\cite{Geiger2012CVPR}, which comprises several data modalities such as calibrated RGB streams, LiDAR 3D point clouds, annotated objects, semantic segmentations and IMU data. Here we refer to the tracking dataset, which has often been used for trajectory prediction~\cite{lee2017desire, srikanth2019infer, marchetti2020memnet, marchetti2020multiple}. Several different splits of the dataset have been used across prior work; we adopt the split introduced in~\cite{marchetti2020memnet}\footnote{https://github.com/Marchetz/KITTI-trajectory-prediction}, which contains 8613 top-view samples for training and 2907 for testing. Trajectories are divided into 2-second past trajectories and 4-second future trajectories, while maps have a spatial resolution of 0.5 meters per pixel. Trajectories are sampled at 10Hz, therefore there are 20 and 40 points for the past and future segments, respectively. As metrics to assess the performance of our model we measure the Final Displacement Error (FDE), i.e. the L2 error in meters at a given timestep (sometimes also referred to as Horizon Error), and the Average Displacement Error (ADE), i.e. the error in meters averaged over all timesteps. We compare our method against existing state-of-the-art works~\cite{lee2017desire, srikanth2019infer, marchetti2020memnet} and some simpler baselines from~\cite{marchetti2020memnet}, namely a linear regressor, a multi-layer perceptron regressor (MLP) and a Kalman filter~\cite{kalman1960new}.
It has to be noted that, due to the different dataset splits, \cite{lee2017desire} and \cite{srikanth2019infer} are not directly comparable to ours and are given as a reference.

\subsection{Results}
To evaluate our model we first generate the states of the Markov Chain by applying K-means to trajectory offsets in polar coordinates extracted from the training set. Since trajectory coordinates in KITTI are acquired with GPS and IMU, they sometimes exhibit noise, especially when a vehicle is moving very slowly or not moving at all. To remove this noise we filter out offsets with $\rho<0.005$ and $\theta>0.5$, preventing still vehicles from making sudden sharp turns. As discussed in Section~\ref{sec:ablation}, we found the optimal number of clusters to be 40. Fig.~\ref{img:clusters} depicts the obtained clusters.

We trained three different variants of our method, varying the source of data: only real trajectories from KITTI, only synthetically generated trajectories, and both real and synthetic trajectories. All variants are tested on the test set of KITTI, i.e. on real data. Tab.~\ref{tab:results} shows the results obtained by the three variants, compared to prior work. The usage of synthetic data alone already provides acceptable results: compared to its counterpart trained with real data, the model performs on par for predictions up to 2 seconds and with an FDE@4s only 0.5 meters worse. This is quite remarkable since no real sample is used to train the model, suggesting that our data generation process is able to approximate realistic samples. This result implies that sampling data using our Markov Chain could augment an existing dataset, thus improving the model without the need for costly data acquisition campaigns. In fact, this is the case when training with mixed data. Here we use the training set from KITTI in combination with synthetic data. During training we sample approximately 16k synthetic samples, compared to the $\sim$8k real ones, but we keep their ratio balanced in each batch. In this way, the error consistently drops below the one obtained with real samples. Especially for far prediction horizons, the model improves considerably, surpassing its real-data counterpart by 0.7 meters in FDE@4s. In this way, we also improve over existing prior work, with the only exception of MANTRA~\cite{marchetti2020memnet}, which is still better by a few centimeters at short time horizons. Samples of predicted trajectories are shown in Fig.~\ref{img:preds_model}.

\begin{table}[] \caption{Average Displacement Error (ADE) and Final Displacement Error (FDE), computed for predictions at different time steps.
DESIRE~\cite{lee2017desire} and INFER~\cite{srikanth2019infer} are shown as a reference even if not directly comparable due to different dataset splits.} \label{tab:results} \resizebox{\columnwidth}{!}{ \begin{tabular}{l|c|c|c|c||c|c|c|c} & \multicolumn{4}{c||}{\textbf{ADE}} & \multicolumn{4}{c}{\textbf{FDE}} \\%\hline
\textbf{Method} & 1s & 2s & 3s & 4s & 1s & 2s & 3s & 4s \\\hline Kalman~\cite{marchetti2020memnet} & 0.51 & 1.14 & 1.99 & 3.03 & 0.97 & 2.54 & 4.71 & 7.41 \\ Linear~\cite{marchetti2020memnet} & 0.20 & 0.49 & 0.96 & 1.64 & 0.40 & 1.18 & 2.56 & 4.73 \\ MLP~\cite{marchetti2020memnet} & 0.20 & 0.49 & 0.93 & 1.53 & 0.40 & 1.17 & 2.39 & 4.12 \\ MANTRA~\cite{marchetti2020memnet} & \textbf{0.17} & \textbf{0.36} & 0.61 & 0.94 & \textbf{0.30} & 0.75 & 1.43 & 2.48 \\ Ours (Synthetic data) & 0.32 & 0.54 & 0.85 & 1.31 & 0.52 & 1.01 & 1.90 & 3.44 \\ Ours (Real data) & 0.31 & 0.53 & 0.78 & 1.24 & 0.51 & 0.95 & 1.63 & 2.95 \\ Ours (Mixed data) & 0.22 & 0.38 & \textbf{0.59} & \textbf{0.89} & 0.35 & \textbf{0.73} & \textbf{1.29} & \textbf{2.27} \\ \hline DESIRE~\cite{lee2017desire} & - & - & - & - & 0.28 & 0.67 & 1.22 & 2.06 \\ INFER~\cite{srikanth2019infer} & 0.56 & 0.75 & 0.93 & 1.22 & 0.81 & 1.08 & 1.55 & 2.46 \\ \hline \end{tabular}} \end{table}

\begin{figure*}[htb] \centering \includegraphics[width=0.20\textwidth]{img/icpr/kitty/810_crop.png} \includegraphics[width=0.20\textwidth]{img/icpr/kitty/391_crop.png} \includegraphics[width=0.20\textwidth]{img/icpr/kitty/920_crop.png} \includegraphics[width=0.20\textwidth]{img/icpr/kitty/2389_crop.png} \\ \smallskip \includegraphics[width=0.20\textwidth]{img/icpr/kitty/1096_crop.png} \includegraphics[width=0.20\textwidth]{img/icpr/kitty/2407_crop.png} \includegraphics[width=0.20\textwidth]{img/icpr/kitty/1830_crop.png} \includegraphics[width=0.20\textwidth]{img/icpr/kitty/65_crop.png} \caption{Outputs of our model trained with mixed data and tested on real data. Past trajectory in red, future trajectory in green and predictions in blue. Purple corresponds to road pixels, pink to sidewalk and black to background.} \label{img:preds_model} \end{figure*}

\subsection{Effect of Multimodality Loss} \label{sec:loss_ablation}
We investigated the advantage of using our \textit{Multimodality Loss} against standard losses such as the \textit{Variety Loss} or simple MSE. As discussed in Sec.~\ref{sec:training}, the advantage of the \textit{Multimodality Loss} is that it can optimize the network for each generated trajectory, instead of optimizing only the best prediction as the \textit{Variety Loss} does. On the other hand, one could indeed backpropagate all predictions with respect to a single ground truth, but this would lead to a lack of multimodality, generating averaged predictions that try to satisfy all possible likely futures. In Tab.~\ref{tab:losses} we report the results obtained by the model using the three losses. For the \textit{Variety Loss} and the MSE we simply pick one of the possible ground truths and discard the information about the others during training. As expected, the MSE proves not to be suitable for the task at hand, due to its inability to generate diversity. Our \textit{Multimodality Loss} instead allows us to lower the error significantly, even compared to the \textit{Variety Loss}, being able to effectively cover more future alternatives.

\begin{table}[] \caption{Analysis of the effect of different losses during training.
Our \textit{Multimodality Loss} outperforms the \textit{Variety Loss} and the MSE since it can explicitly address multimodality.} \label{tab:losses} \resizebox{\columnwidth}{!}{ \begin{tabular}{l|c|c|c|c||c|c|c|c} & \multicolumn{4}{c||}{\textbf{ADE}} & \multicolumn{4}{c}{\textbf{FDE}} \\%\hline
\textbf{Method} & 1s & 2s & 3s & 4s & 1s & 2s & 3s & 4s \\\hline MSE & 0.35 & 0.68 & 1.16 & 1.81 & 0.59 & 1.42 & 2.75 & 4.68 \\ \textit{Variety Loss} & 0.34 & 0.54 & 0.80 & 1.19 & 0.53 & 0.94 & 1.70 & 3.03 \\ \textit{Multimodality Loss} & \textbf{0.22} & \textbf{0.38} & \textbf{0.59} & \textbf{0.89} & \textbf{0.35} & \textbf{0.73} & \textbf{1.29} & \textbf{2.27} \\ \end{tabular}} \end{table}

\subsection{Ablation Studies} \label{sec:ablation}
We perform several ablation studies to analyze the importance of specific components of the model architecture and of the data generation process (Tab.~\ref{tab:ablation}).

First we trained our model disabling some components of the synthetic data generation process: without simulating LiDAR noise, without shifting trajectories inside lanes, and without adding unreachable roads. Turning off the synthetic LiDAR noise slightly lowers the performance of the model. This happens mostly for vehicles whose futures lie in noisy parts of the map: the model interprets the noise as background and tries to avoid it. Similar results are obtained when all generated trajectories run in the middle of the lane. Training with this data, the model often tends to make predictions drift towards the center of the road instead of following the natural path of the vehicle. A more considerable drop in performance is observed without adding unreachable roads. When maps are generated with possible futures along every visible road, the controller tries to guide predictions towards both reachable and unreachable areas. This may lead to very unnatural predictions, since at test time the predicted paths will often cut through the background in order to reach every visible road.

We then tested the effect of using a different Markov Chain to generate trajectories. As explained in Sec.~\ref{sec:trajgen}, we normally use nodes that correspond to pairs of clusters, therefore taking two timesteps into account. We generated a Markov Chain with states composed of a single timestep and retrained the model. The generated samples do not approximate the real data well enough, leading to noisy trajectories that often change direction and speed abruptly. This is reflected in a drop of 0.6 meters in FDE@4s, as observed in Tab.~\ref{tab:ablation}.

What affects the model the most, though, is the attention mechanism. We trained our model disabling it, making the controller feed its output directly to the decoder. The map encoding is now taken into account only as the initial state of the controller, instead of being used to guide individual predictions. This turns out to be highly detrimental for the model: the performance drops severely and the error rises by almost 3 meters at 4 seconds.

\begin{table}[] \caption{Ablation study.
Our model is compared to variants with: no simulated LiDAR noise; no random trajectory shift across lanes; no unreachable roads; data generated by a Markov Chain with single-timestep states; absence of the attention mechanism.} \label{tab:ablation} \resizebox{\columnwidth}{!}{ \begin{tabular}{l|c|c|c|c||c|c|c|c} & \multicolumn{4}{c||}{\textbf{ADE}} & \multicolumn{4}{c}{\textbf{FDE}} \\%\hline
\textbf{Method} & 1s & 2s & 3s & 4s & 1s & 2s & 3s & 4s \\\hline Ours & \textbf{0.22} & \textbf{0.38} & \textbf{0.59} & \textbf{0.89} & \textbf{0.35} & \textbf{0.73} & \textbf{1.29} & \textbf{2.27} \\ No LiDAR noise & 0.23 & 0.40 & 0.62 & 0.92 & 0.37 & 0.75 & 1.34 & 2.35 \\ No trajectory shift & 0.26 & 0.45 & 0.68 & 0.99 & 0.43 & 0.83 & 1.42 & 2.40 \\ No unreachable roads & 0.29 & 0.48 & 0.72 & 1.06 & 0.47 & 0.87 & 1.50 & 2.62 \\ Single chain states & 0.37 & 0.55 & 0.81 & 1.18 & 0.54 & 0.97 & 1.68 & 2.91 \\ No attention & 0.42 & 0.80 & 1.31 & 2.02 & 0.70 & 1.61 & 3.03 & 5.15 \\ \end{tabular}} \end{table}

In addition, we verified the effect of the number of clusters used by K-means when generating the states of the Markov Chain. Fig.~\ref{img:num_clusters} shows the resulting FDE and ADE at a time horizon of 4 seconds using 20, 40, 60 and 80 clusters. The optimal value appears to be 40, with the error curve being convex with respect to the number of clusters. The model, however, is quite robust to this choice, since the FDE remains under 3 meters for all tested values.

\begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{img/num_clusters} \caption{Results obtained varying the number of clusters in K-means.} \label{img:num_clusters} \end{figure}

\begin{figure}[htb] \centering \includegraphics[width=0.32\columnwidth, trim={175px 250px 175px 100px},clip]{img/icpr/ablations/nolidar/im_t1937.png} \includegraphics[width=0.32\columnwidth, trim={200px 250px 250px 200px},clip]{img/icpr/ablations/noshift/im_t1193.png} \includegraphics[width=0.32\columnwidth, trim={75px 200px 275px 150px},clip]{img/icpr/ablations/notrick/im_t1096.png}\\ \smallskip \includegraphics[width=0.32\columnwidth, trim={175px 250px 175px 100px},clip]{img/icpr/ablations/good/im_t1937.png} \includegraphics[width=0.32\columnwidth, trim={200px 250px 250px 200px},clip]{img/icpr/ablations/good/im_t1193.png} \includegraphics[width=0.32\columnwidth, trim={75px 200px 275px 150px},clip]{img/icpr/ablations/good/im_t1096.png} \caption{Ablation study samples on real data. Predictions obtained training the model without (top) and with (bottom) the synthetic data augmentation strategies: \textit{LiDAR noise} (left); \textit{Trajectory shift} (middle); \textit{Unreachable roads} (right). \vspace{-10px}} \label{img:ablation} \end{figure}

\section{Conclusion}
In this paper we presented a method to generate synthetic trajectory samples exploiting a Markov Chain with parameters estimated from real data. This has two main advantages. First, the possibility to augment existing datasets and train better prediction models. Second, the possibility to couple past observations with multiple ground truths, which allowed us to exploit a new loss to train our model with full supervision and address the intrinsic multimodality of the task. The usage of this technique for generating synthetic data, along with a model specifically tailored for multimodal predictions, has led to state-of-the-art results on the KITTI trajectory prediction benchmark.
\section*{Acknowledgments} \footnotesize This work was supported by the European Commission under European Horizon 2020 Programme, grant number 951911 - AI4Media. \bibliographystyle{IEEEtran}
\section{Introduction}
One of Rufus Bowen's many contributions to smooth ergodic theory is a construction often referred to as Bowen's eye. It is an example of a flow $\phi^t$ on $\mathbb{R}^2$ with two hyperbolic fixed points $\mathbf{p}$ and $\mathbf{q}$ such that one branch of the stable manifold of $\mathbf{p}$ coincides with one branch of the unstable manifold of $\mathbf{q}$ and vice versa, forming an eye-shaped region between two separatrices. In the interior of this region, there is a repelling fixed point with complex eigenvalues, inducing a spiralling behaviour towards the boundary of the eye. One can show that, if the eigenvalues of $D\phi^1(\mathbf{p})$ and $D\phi^1(\mathbf{q})$ are chosen appropriately, then the Birkhoff averages \[\frac{1}{t} \int_0^t f(\phi^s(\mathbf{x})) \ ds \] diverge as $t \to \infty$ for every $\mathbf{x}$ inside the eye except for the fixed point, whenever $f: \mathbb{R}^2 \to \mathbb{R}$ is a continuous function taking different values at $\mathbf{p}$ and $\mathbf{q}$ (see e.g. Takens \cite{MR1274765}).

Bowen's eye is the best known example of what is now called \emph{historic behaviour}: the existence of a positive Lebesgue measure set of initial points for which Birkhoff averages are divergent. Historic behaviour has been found in many different contexts. Herman gave a very simple example of a system exhibiting such behaviour \cite{herman}. Hofbauer and Keller \cite{hofbauer1990} proved that there are uncountably many quadratic maps with almost everywhere divergent Birkhoff averages. Kiriki and Soma \cite{KIRIKI2017524} have shown that for $r \geq 2$, there is a $C^r$ open set $\mathcal{N}$ of surface diffeomorphisms, and a dense subset $D \subset \mathcal{N}$, such that every $f \in D$ has a wandering domain, giving rise to historic behaviour\footnote{Very recently, Berger and Biebler have announced in \cite{berger2020emergence} a similar result for the classes $C^\infty$ and $C^\omega$.}. The set $\mathcal{N}$ is obtained by Newhouse's construction of persistent tangencies and contains the H{\'e}non family. Their results have been generalized to higher dimensions by Barrientos \cite{2103.11964} and were adapted to flows by Labouriau and Rodrigues \cite{Labouriau_2017}. Saburov \cite{MR4185284} recently found that historic behaviour is abundant among predator-prey dynamics. A transitive partially hyperbolic diffeomorphism on $\mathbb{T}^3$ displaying historic behaviour is given in a recent work of Crovisier, Yang, and Zhang \cite{MR4082180}. In a recent work \cite{2003.02185}, Talebi studies historic behaviour in the setting of rational maps of the Riemann sphere. He has announced that the set of maps with historic behaviour contains a dense $G_\delta$ subset of the closure of strictly post-critically finite maps, i.e. maps for which all critical points lie in the pre-orbit of a periodic repeller. In the continuous setting, Abdenur and the first author \cite{Abdenur2013} showed that in the $C^0$ conjugacy class of expanding circle maps, there is a dense $G_\delta$ set of maps with historic behaviour.

Although historic behaviour is abundant in some special families of dynamical systems, it is generally believed that it cannot be persistent among smooth maps or flows without any extra structure.
However, Ruelle has expressed some hope that such examples may exist \cite{MR1858471}, and Takens emphasized it as an important problem in \cite{MR2396607}, whence it has become known as \emph{Takens' last problem}.

The present work deals with Takens' last problem for the special family of reparametrized linear flows on the torus with two stopping points. In this setting, one must either have historic behaviour or a unique physical measure whose basin has full measure (a dichotomy which does not hold for other dynamical systems). We show that both possibilities occur, depending on the angle of the flow and the relative position of the stopping points, but historic behaviour is the more abundant phenomenon, both from a topological and a measure theoretic point of view.

As far as we know, irrational flows with two stopping points have not been studied before. The case of flows with one stopping point, however, has been extensively studied, from the foundational work of Ko\v{c}ergin \cite{MR0516507} to finer results about the mixing rate (e.g. the recent result of Fayad, Forni, and Kanigowski \cite{fayad2019lebesgue}). See the survey of Dolgopyat and Fayad \cite{MR3309100}; in particular, the results of the present article are based on studies of Birkhoff sums that give partial answers to Question 41 (this question was already tackled by Sina\u{\i} and Ulcigrai in \cite{MR2478478}). For its part, the study of physical measures for flows on surfaces is somewhat more developed, with, among others, Katok's example (e.g. Kwapisz \cite{MR2351022}), the examples of Saghin, Sun and Vargas \cite{MR2670926}, and the special attention paid to Cherry flows by Palmisano \cite{palmisano2014physical}, Saghin and Vargas \cite{Saghin_2012} and Yang \cite{YANG_2016}.

\begin{remark} The definition of historic behaviour varies in the literature. Sometimes it is defined pointwise, so that a point is said to have historic behaviour whenever its Birkhoff averages fail to converge for some continuous observable. There has been a recent surge in research about systems for which such historic behaviour occurs on a residual (dense $G_\delta$) set of points \cite{MR4212116, MR4055947, 2107.01200, MR3567830, 2107.12498}. \end{remark}

\subsection*{Formulation of the problem and summary of results}
Consider a constant vector field $X_0 = (1,\alpha)$ on $\mathbb{T}^2= \mathbb{R}^2 / \mathbb{Z}^2$. We shall always assume that $\alpha$ is irrational (otherwise any reparametrization of the flow is periodic, and thus has extremely simple ergodic behaviour). Let $\varphi: \mathbb{T}^2 \to \mathbb{R}$ be a non-negative smooth function that vanishes at exactly two points, $\mathbf{p}$ and $\mathbf{q}$ say. We assume that $\varphi$ is of quadratic order at these points -- by that we mean that its derivative $D\varphi$ vanishes and that the Hessian $D^2 \varphi$ is positive definite at both $\mathbf{p}$ and $\mathbf{q}$. This assumption is quite natural, being the case of lowest codimension\footnote{If the map $\varphi$ is not smooth at $(0,0)$, then the behaviour of the reparametrized flow can be quite different, see e.g. Kwapisz and Mathinson \cite{MR2947933}.}. Let $X = \varphi X_0$ and let $\phi^t$ be the corresponding flow on $\mathbb{T}^2$, which will be referred to as the \emph{reparametrized linear flow}. Such flows are topologically mixing but have zero entropy (see the introduction of Kanigowski \cite{MR3819702} for more information).
Since we assume $\alpha$ to be irrational, the stable sets of $\mathbf{p}$ and $\mathbf{q}$ are densely immersed semi-lines, consisting of those initial points $\mathbf{x}$ for which $\phi^t(\mathbf{x})$ approaches $\mathbf{p}$ or $\mathbf{q}$ as $t$ tends to infinity. Every point belonging to neither of these stable sets (in particular, every point in a set of full Haar measure on $\mathbb{T}^2$) has a dense future orbit under $\phi^t$. The question arises as to what can be said about the time averages of such points. As we shall see in Proposition~\ref{invariant probs better}, the flow $\phi^t$ has no invariant probabilities other than the point masses at $\mathbf{p}$ and $\mathbf{q}$ and their convex combinations.

Let $\mathcal{M}$ be the set of Borel probabilities on $\mathbb{T}^2$; endowed with the weak-* topology, this set is compact. The flow $\phi^t$ induces at every point $\mathbf{x} \in \mathbb{T}^2$ a family $\{\mu_\mathbf{x}^t \}_{t>0}$ of what we may call empirical measures, given by \begin{equation}\label{EqDefPhi} \int_{\mathbb{T}^2} f \ d\mu_\mathbf{x}^t = \frac{1}{t} \int_0^t f(\phi^s(\mathbf{x})) \ ds \end{equation} for every continuous $f: \mathbb{T}^2 \to \mathbb{R}$. Let $p\omega(\mathbf{x})$ be the compact subset of $\mathcal{M}$ defined by \[p\omega(\mathbf{x}) = \bigcap_{T > 0} \overline{ \{\mu_\mathbf{x}^t: t \geq T \} }. \]

We denote by $\boldsymbol{\lambda}$ the Haar measure (also referred to as Lebesgue measure) on $\mathbb{T}$ and by $\la^2 = \boldsymbol{\lambda} \times \boldsymbol{\lambda}$ the Haar measure on $\mathbb{T}^2$. Although $\la^2$ is not invariant under the flow $\phi^t$, it is still ergodic in the sense that if $A$ is a Borel measurable set such that $\phi^t(A) = A$ for every $t \in \mathbb{R}$, then $\la^2(A)$ is either $0$ or $1$ (because $\phi^t$ is almost everywhere orbit equivalent to the linear flow $\phi_0^t$ associated to the vector field $X_0$). As a consequence, $p\omega(\mathbf{x})$ is $\la^2$-almost everywhere constant (see Proposition~\ref{invariant probs better}). More precisely, let \begin{equation}\label{EqFormPhys} \mu_\infty := \frac{\sqrt{d_\mathbf{q}}}{\sqrt{d_\mathbf{p}} + \sqrt{d_\mathbf{q}}} \delta_\mathbf{p}\, + \, \frac{\sqrt{d_\mathbf{p}}}{\sqrt{d_\mathbf{p}} + \sqrt{d_\mathbf{q}}} \delta_\mathbf{q}, \end{equation} where $d_\mathbf{p}$ and $d_\mathbf{q}$ are the determinants of the Hessians of $\varphi$ at $\mathbf{p}$ and $\mathbf{q}$, and $\delta_\mathbf{p}$, $\delta_\mathbf{q}$ are the point masses at these points. We have the following dichotomy: for any triple $(\mathbf{p}, \mathbf{q}, \alpha)$ and any reparametrized linear flow $\phi^t$ with stopping points at $\mathbf{p}$ and $\mathbf{q}$ (see Proposition~\ref{PropPossibOmega}), \begin{enumerate}[(i)] \item either the Birkhoff averages $\mu_\mathbf{x}^t$ are $\la^2$-almost everywhere divergent (i.e. $\operatorname{card} p\omega(\mathbf{x})\ge 2$ a.e.); \item or $\mu_\mathbf{x}^t$ converges to $\mu_\infty$ for $\la^2$-almost every $\mathbf{x} \in \mathbb{T}^2$. \end{enumerate} In case (i) we say that $\phi^t$ has \emph{historic behaviour}, and in case (ii) we say that $\phi^t$ has a \emph{physical measure}. If, in case (i), it so happens that \[p\omega(\mathbf{x}) = \{ \alpha \delta_\mathbf{p} + (1-\alpha) \delta_\mathbf{q}: \ 0 \leq \alpha \leq 1 \} \] for $\la^2$-almost every $\mathbf{x} \in \mathbb{T}^2$, then we say that $\phi^t$ has an \emph{extreme historic behaviour}.
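This dichotomy can be explored numerically. The following is a minimal sketch of the kind of simulation used later (cf. Figure~\ref{simul2}): an Euler discretization $M \mapsto M + \delta\,\varphi(M)(1,\alpha)$ of the flow, tracking the running fraction of time spent near $\mathbf{p}$. The particular choice of $\varphi$ (which is Lipschitz rather than of quadratic order), the parameter values and all names are illustrative only.

\begin{verbatim}
import numpy as np

def fraction_near_p(alpha, p, q, x0, n_steps=10**7, delta=0.1, eps=0.05):
    # Running fraction of steps spent within distance eps of p, i.e. an
    # approximation of mu_x^t(B(p, eps)) along the discretized orbit;
    # oscillation of these averages suggests historic behaviour.
    def dist(a, b):                        # distance on T^2 = R^2/Z^2
        d = np.abs(a - b) % 1.0
        return np.linalg.norm(np.minimum(d, 1.0 - d))
    v = np.array([1.0, alpha])
    M = np.array(x0, dtype=float)
    hits, fractions = 0, []
    for n in range(1, n_steps + 1):
        speed = min(dist(M, p), dist(M, q))  # vanishes at p and q
        M = (M + delta * speed * v) % 1.0
        hits += dist(M, p) < eps
        if n % 10**5 == 0:
            fractions.append(hits / n)
    return fractions
\end{verbatim}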
\begin{remark} More generally, a measure $\mu$ is called \emph{physical} if its basin $B(\mu) = \{\mathbf{x}: p\omega(\mathbf{x}) = \{\mu \} \}$ has positive $\la^2$-measure. Some dynamical systems have several physical measures. In our setting, whenever there is a physical measure, it is unique and its basin has full $\la^2$-measure. A curious feature of irrational flows of category (ii) is that they give rise to physical measures that are not ergodic. This is a rare phenomenon for transitive systems which, to our knowledge, has only been found once before, in Saghin, Sun and Vargas \cite{MR2670926} (see Mu\~{n}oz, Navas, Pujals, and V\'{a}squez \cite{MR2373211} for an interesting non-transitive example). \end{remark}

The question arises as to which choices of $\alpha$ and stopping points $\mathbf{p}$, $\mathbf{q}$ give rise to historic behaviour, and which choices result in a physical measure.

\begin{hypotheses*} \label{SH} Throughout this work, a reparameterized linear flow with angle $\alpha$ and stopping points at $\mathbf{p}$ and $\mathbf{q}$ always refers to a flow $\phi^t$ generated by a vector field $X=\varphi X_0$ with the following properties: \begin{enumerate} \item $X_0 = (1,\alpha)$ and $\alpha$ is irrational; \item $\varphi: \mathbb{T}^2 \to \mathbb{R}_+$ is of class $C^3$; \item $\varphi$ vanishes at two distinct points $\mathbf{p}$ and $\mathbf{q}$, and is positive elsewhere (hence $D \varphi (\mathbf{p}) = D \varphi(\mathbf{q}) = 0$); \item the Hessians $D^2 \varphi (\mathbf{p})$ and $D^2 \varphi (\mathbf{q})$ are positive definite matrices. \end{enumerate} \end{hypotheses*}

\medskip In this paper we obtain a rather complete set of criteria, expressed in terms of Diophantine approximation properties, under which a reparametrized flow satisfying (SH) has historic behaviour or a unique physical measure. Our first result says that in most cases (from both a measure-theoretic and a topological viewpoint), a reparametrized flow satisfying (SH) has historic behaviour. Still, there are nontrivial cases where there is a unique physical measure. The following statement is a combination of Theorems \ref{generic distinct orbits}, \ref{PropDivSum} and \ref{physmeas different orbits}.

\begin{theo}\label{TheoIntro1} There are subsets $\mathcal{F}, \mathcal{R}, \mathcal{D} \subset \mathbb{R} \times \mathbb{T}^2 \times \mathbb{T}^2$ with the following characteristics: $\mathcal{F}$ is of full Lebesgue measure, $\mathcal{R}$ is a dense $G_\delta$ set, and $\mathcal{D}$ is dense (but not $G_\delta$), such that for any $(\alpha, \mathbf{p}, \mathbf{q}) \in \mathbb{R} \times\mathbb{T}^2\times\mathbb{T}^2$, any reparameterized linear flow $\phi^t$ satisfying (SH) with angle $\alpha$ and stopping points at $\mathbf{p}$ and $\mathbf{q}$ has \begin{itemize} \item a unique physical measure if $(\alpha, \mathbf{p}, \mathbf{q}) \in \mathcal{D}$, \item historic behaviour if $(\alpha, \mathbf{p}, \mathbf{q}) \in \mathcal{F}$, and \item an extreme historic behaviour if $(\alpha, \mathbf{p}, \mathbf{q}) \in \mathcal{R}$. \end{itemize} In the first case, the physical measure is given by (\ref{EqFormPhys}). \end{theo}

All the sets of angles (i.e. the projections of $\mathcal{F}$, $\mathcal{R}$ and $\mathcal{D}$ to $\mathbb{R}$) in Theorem~\ref{TheoIntro1} are explicit, in the sense that they arise from conditions involving quantities determined by the expression of the angle as a continued fraction.
Let us be a little more precise: the set $\mathcal{F}$ can be written as $A\times \mathbb{T}^2\times \mathbb{T}^2$, with $A\subset\mathbb{R}$ of full measure. We also prove that the first case (i.e. $(\alpha, \mathbf{p}, \mathbf{q}) \in \mathcal{D}$) occurs in two ways, namely with $\mathbf{p}$ and $\mathbf{q}$ in the same orbit (of the non-reparameterized flow) as well as with $\mathbf{p}$ and $\mathbf{q}$ in different orbits. Both situations occur densely in the parameter space $\mathbb{R} \times \mathbb{T}^2 \times \mathbb{T}^2$, and there exists a set $B\subset \mathbb{R}$ of full Hausdorff dimension such that for any $\alpha\in B$, there exist $\mathbf{p},\mathbf{q}\in\mathbb{T}^2$ such that $(\alpha,\mathbf{p},\mathbf{q})\in\mathcal{D}$.

We give special attention to extreme historic behaviour in the case where $\mathbf{p}=(0,0)$ and $\mathbf{q}=(0,\beta)$ for some rational $\beta$ (see Theorem~\ref{refined rational distinct orbits} for a refined version).

\begin{theo}\label{rational distinct orbits} There exists a dense $G_\delta$ set $A \subset \mathbb{R}$ such that if $\alpha \in A$, $\beta \in \mathbb{Q} \setminus \{0\}$, and if $\phi^t$ is a reparametrized linear flow satisfying (SH) with angle $\alpha$ and stopping points at $(0,0)$ and $(0, \beta)$, then $\phi^t$ has an extreme historic behaviour. \end{theo}

Finally, we describe more specifically what happens in the case where the singularities $\mathbf{p}$ and $\mathbf{q}$ lie on the same orbit. The results are described by the following combination of Theorems \ref{PropConv} and \ref{PropDivSum}.

\begin{theo}\label{TheoIntro2} There exist a full measure set $\mathcal{A} \subset \mathbb{R}$ and a dense set $\mathcal{B} \subset \mathbb{R}$ of full Hausdorff dimension such that if $\mathbf{q} = \mathbf{p} + r(1, \alpha) \mod \mathbb{Z}^2$ for some $r>0$, then for any reparameterized linear flow $\phi^t$ satisfying (SH), \begin{itemize} \item $p\omega(\mathbf{x}) = [\mu_\infty, \delta_\mathbf{p}]$ for $\la^2$-a.e. $\mathbf{x}$ if $\alpha\in \mathcal{A}$ (historic behaviour); \item $p\omega(\mathbf{x}) = \{\mu_\infty\}$ for $\la^2$-a.e. $\mathbf{x}$ if $\alpha\in \mathcal{B}$ (physical measure). \end{itemize} \end{theo}

(Here $[\mu_\infty, \delta_\mathbf{p}]$ denotes the set $\{\alpha \mu_\infty + (1-\alpha) \delta_\mathbf{p} : \ 0 \leq \alpha \leq 1 \}$ of convex combinations of $\mu_\infty$ and $\delta_\mathbf{p}$.) As before, the sets $\mathcal{A}$ and $\mathcal{B}$ are given by explicit Diophantine conditions. Some simulations related to this theorem can be found in Figure~\ref{simul2}.

\begin{figure}\label{simul2} \noindent \includegraphics[width=.45\linewidth]{Proportion_alpha_0.3225445680913981_T_1000000000.png}\hfill \includegraphics[width=.45\linewidth]{Proportion_alpha_0.41421356237309515_T_1000000000.png} \caption{Simulations of the proportion of time spent by the flow $\phi^t$ in some fixed small neighbourhood of $\mathbf{p}$, depending on $\log_{10}$ of the time (i.e. the right end of the graphics, with abscissa 9, corresponds to time $10^9$). More precisely, these are simulations of a single orbit starting at the point $M=(0.6319874,\,0.3684641)$ of the map $M\mapsto M+\delta\varphi(M)(1,\alpha)$, with $\delta=0.1572348$ and $\alpha=4/13+2/135+1/26\,714+2/166\,267\,121$ (left) resp. $\alpha=\sqrt{2}-1$ (right), and $\varphi(M) = \min(\|M-\mathbf{p}\|_2, \|M-\mathbf{q}\|_2)$ for $\mathbf{p}=(0.25,0.75)$ and $\mathbf{q}=\phi^{8.357}(\mathbf{p})$.
It is not clear whether our theorems' predictions can be observed here: since the $\alpha$ of the left graphic is ``Liouville-like'' (at least for the times considered in the simulations) and the right one is Diophantine (it is of bounded type), by Theorems \ref{PropConv} and \ref{PropDivSum} the left graphic should eventually oscillate between two different values and the right one should converge to $1/2$.} \end{figure}

Let us say a few words about the global strategy for proving these theorems. By considering a Poincar{\'e} section, we reduce the study to that of the Birkhoff sums $S_n(x)$ of points of $\mathbb{T}$ under the rotation $R_\alpha$ for the observable $\|x\|^{-1} = d(x,\mathbb{Z})^{-1}$ (Proposition~\ref{criterium1}). More precisely, \begin{itemize} \item if for almost every $x$ one has $\|R_\alpha^n(x)\|^{-1} = o(S_n(x))$, then the system has a unique physical measure; \item if for almost every $x$ one has $\|R_\alpha^n(x)\|^{-1} \neq o( S_n(x))$, then the system has historic behaviour. \end{itemize} Roughly speaking, one wants to decide whether the orbits of most points $x$ eventually come very close to $0$ or not (close enough to kill all the previous contributions made by $\|\cdot\|^{-1}$ to the Birkhoff sums).

\subsection*{Irrational flows with more than two stopping points}
Let us say a few words about the case of more than two stopping points by pointing out some direct consequences of our theorems in the case of three stopping points $\mathbf{p},\mathbf{q}$ and $\mathbf{r}$. By a trivial generalization of Proposition~\ref{criterium1} to the case of more than two stopping points, if a flow with angle $\alpha$ and stopping points $\mathbf{p}$ and $\mathbf{q}$ has a physical measure, and a flow with angle $\alpha$ and stopping points $\mathbf{q}$ and $\mathbf{r}$ has a physical measure, then a flow with angle $\alpha$ and stopping points $\mathbf{p},\mathbf{q}$ and $\mathbf{r}$ also has a physical measure. This gives generalizations of Theorems~\ref{simple physical measure}, \ref{PropConv} and \ref{physmeas different orbits}; in particular, the set of parameters of flows with $N$ stopping points contains a dense subset consisting of those with a unique physical measure. Similarly, if a flow with angle $\alpha$ and stopping points $\mathbf{p}$ and $\mathbf{q}$ has historic behaviour, and a flow with angle $\alpha$ and stopping points $\mathbf{q}$ and $\mathbf{r}$ has historic behaviour, then a flow with angle $\alpha$ and stopping points $\mathbf{p},\mathbf{q}$ and $\mathbf{r}$ also has historic behaviour. This allows us to generalize Theorems~\ref{refined rational distinct orbits}, \ref{generic distinct orbits} and \ref{PropDivSum} to get flows with multiple stopping points and historic behaviour.

However, the generalization of the notion of extreme historic behaviour is unclear. In the case of three stopping points, the set of invariant measures is the simplex spanned by $\delta_\mathbf{p}$, $\delta_\mathbf{q}$ and $\delta_{\mathbf r}$. An in-depth look at the proofs of Theorems~\ref{refined rational distinct orbits} and \ref{generic distinct orbits} would probably lead to the fact that on a full measure set of initial conditions $\mathbf{x}$, the segments $[\delta_\mathbf{p},\delta_\mathbf{q}]$, $[\delta_\mathbf{q},\delta_{\mathbf r}]$ and $[\delta_{\mathbf r},\delta_\mathbf{p}]$ are included in $p\omega(\mathbf{x})$, but this is only the boundary of the simplex. This leads to the following question.
\begin{question} Consider an irrational flow with three stopping points with parameters $\alpha,\mathbf{p},\mathbf{q},\mathbf r$. Is $p\omega(\mathbf{x})$ equal to the whole simplex spanned by $\delta_\mathbf{p}$, $\delta_\mathbf{q}$ and $\delta_{\mathbf r}$ for a.e. $\mathbf{x}$ and a full measure set of parameters $(\alpha,\mathbf{p},\mathbf{q},\mathbf r)$? For a generic set of parameters $(\alpha,\mathbf{p},\mathbf{q},\mathbf r)$? If not, what is the dimension of $p\omega(\mathbf{x})$? \end{question}

Such a result would need much deeper techniques than the ones developed in the present paper, as we would probably have to determine the whole set of accumulation points of $\Theta_k^\beta(x)$ (see \eqref{DefTheta}) instead of just proving that it is large enough for a big set of points $\mathbf{x}$, moreover taking into account the interplay between the points $\mathbf{p},\mathbf{q}$ and $\mathbf r$.

\begin{remark} By taking the product of the time $t$ map of the flow $\phi^t$, for a small but non-zero $t$, with the Arnold cat map \[A: (x,y) \mapsto (2x+y, x+y) \mod \mathbb{Z}^2\quad \text{ for } (x,y) \in \mathbb{T}^2,\] we obtain a partially hyperbolic diffeomorphism $f = \phi^t \times A:\mathbb{T}^4 \to \mathbb{T}^4$. It is straightforward to see that the time $t$ map of an irrational flow with stopping points is topologically mixing, and that the product of two topologically mixing maps is itself topologically mixing. Thus any $f$ obtained in this way is topologically mixing. It turns out that maps of this form have rather unusual ergodic properties that are worthwhile pointing out. Suppose, first, that $\phi^t$ has a unique physical measure. Then $f$ has a unique, non-ergodic physical measure. Moreover, the center Lyapunov exponents are zero for \emph{every} $x \in \mathbb{T}^4$ and $f$ is mixing. We do not know of any such example in the literature. Next suppose that $\phi^t$ has historic behaviour. Then so has $f = \phi^t \times A$. To our knowledge, it is the first topologically mixing example of this kind. A transitive example on $\mathbb{T}^3$ was given by Crovisier, Yang, and Zhang in \cite{MR4082180}. \end{remark}

\subsection*{Outline of the paper}
In Section \ref{special flows}, we relate the ergodic behaviour of most orbits of a reparametrized linear flow satisfying (SH) to that of a special flow obtained as a suspension flow over a rotation. This allows us to get asymptotics of return times to a Poincar{\'e} section, in terms of the Hessian determinants at the stopping points. This interpretation in terms of suspension flows allows us, in Section~\ref{SecInv}, to get estimates on return times in terms of Birkhoff sums for the non-integrable observable\footnote{Similar Birkhoff sums are studied by Sina\u{\i} and Ulcigrai \cite{MR2478478}, but with $x^{-1}$ instead of $\|x\|^{-1}$, which allows the authors to use cancellations between the positive and negative parts of the observable.} $\|\cdot\|^{-1}$. Using these estimates, we get an exact formula relating the set $p\omega(\mathbf{x})$ of limit measures of $\mathbf{x}\in\mathbb{T}^2$ to the asymptotic behaviour of some quantity defined from Birkhoff sums (Proposition~\ref{criterium1}). Using some symmetry properties of this quantity, we then get criteria for historic behaviour/physical measure (Subsection~\ref{SubsecSym}). Section~\ref{PartRotations} is devoted to some reminders about properties of circle rotations and their renormalizations, linked with the continued fraction expansion of the angle.
Section~\ref{SecTech} is quite technical: we get some crucial bounds (from above and below) on Birkhoff sums for the observable $\|\cdot\|^{-1}$, using in particular a comparison of the orbits with those of a rational rotation. This section is used in the four last sections of the paper, each of which is aimed at proving a part of Theorems~\ref{TheoIntro1}, \ref{rational distinct orbits} and \ref{TheoIntro2}. Note that the last section uses the proof strategy of the previous one (Section~\ref{SecPhysSame}).

\section{Special flows} \label{special flows}
\subsection{Definition and notations}\label{SecDefFlow}
Fix $\varphi$ and $X$ as described in the introduction, i.e. $X = \varphi X_0$ where $X_0 = (1,\alpha)$ is a constant vector field on $\mathbb{T}^2$ and $\varphi: \mathbb{T}^2 \to \mathbb{R}$ is a non-negative smooth function that vanishes at exactly two points $\mathbf{p}$ and $\mathbf{q}$. Let $\phi_0^t$ and $\phi^t$ be the flows of $X_0$ and $X$ respectively.

Fix some $x_0 \in \mathbb{T}$ such that $\mathbf{p}, \mathbf{q} \notin \Sigma \stackrel{\text{def.}}{=} \{x_0\} \times \mathbb{T}$ (with $\mathbb{T} = \mathbb{R}/\mathbb{Z}$). Then $\Sigma$ is a transverse section of the flow $\phi^t$. Let $(x_0,p_0)$ be the unique point on $\Sigma$ such that $\phi_0^t(x_0, p_0) = \mathbf{p}$ for some $t \in (0, 1)$; define $q_0$ analogously. In other words, $(x_0,p_0)$ is the only point of $\Sigma$ satisfying $\phi^t(x_0,p_0) \notin \Sigma$ for all $t>0$ and $\lim_{t \to \infty} \phi^t(x_0,p_0) = \mathbf{p}$. Likewise for $(x_0,q_0)$.

We say that $\mathbf{p}$ and $\mathbf{q}$ \emph{lie on the same orbit} if they belong to the same orbit of the flow $\phi_0^t$. This is equivalent to saying that there is some point $\mathbf{x}$ such that $\lim_{t \to - \infty} \phi^t(\mathbf{x}) = \mathbf{p}$ and $\lim_{t \to \infty} \phi^t(\mathbf{x}) = \mathbf{q}$, or vice versa. Note that $\mathbf{p}$ and $\mathbf{q}$ lie on the same orbit if and only if $p_0$ and $q_0$ lie on the same orbit under the rotation \begin{align*} R_\alpha: \mathbb{T} & \to \mathbb{T} \\ y & \mapsto y +\alpha \mod 1. \end{align*} Note that we can (and do) always choose $x_0$ so that $p_0 \neq q_0$. Indeed, if $\mathbf{p}$ and $\mathbf{q}$ are not on the same orbit, this is always the case. If $\mathbf{p}$ and $\mathbf{q}$ are on the same orbit, it suffices to choose $x_0$ so that $\Sigma$ intersects the orbit segment that joins $\mathbf{p}$ and $\mathbf{q}$ (indeed, $p_0=q_0$ would imply that $\Sigma$ does not intersect the orbit between $\mathbf{p}$ and $\mathbf{q}$).

Let $Y = \mathbb{T} \setminus \{p_0, q_0 \}$. We define a return time map $T: Y \to \mathbb{R}$ (also called the \emph{roof function} in the sequel) by \begin{equation}\label{EqDefTau} T(y) = \min\big\{t>0: \phi^t(x_0,y) \in \Sigma \big\}.
\end{equation} Let \[D_0 = \big\{(u,t) \in Y\times\mathbb{R}: 0 \leq t \leq T(u)\big\} \] and \[D = D_0/\sim,\qquad \text{where}\ (u,T(u))\sim(R_\alpha(u),0).\] This allows us to define a map $\Xi:D \to \mathbb{T}^2$ by \[\Xi(u,t) = \phi^t(x_0, u).\]

\begin{figure} \begin{tikzpicture}[scale=4.5] \clip(-.2,-.2) rectangle (1,.7); \fill[color=black, opacity=.1] (0,0) -- plot[domain=0:1, scale=1, samples=80] (\x,{0.03*max(1/abs(\x-.3),1/abs(\x-1.3)) + 0.03*max(1/abs(\x-.7),1/abs(\x+.3))}) -- (1,0) -- cycle; \draw[color=blue, thick, ->] (.4,.1) node{$\bullet$} node[below]{$u$} -- (.4,.2); \draw[color=blue, thick] (.4,.2) -- (.4,.4); \draw[color=blue, thick, ->] (.85,0) -- (.85,.15); \draw[color=blue, thick] (.85,.15) -- (.85,.27); \draw[thick] (0,0) -- plot[domain=0:1, scale=1, samples=80] (\x,{0.03*max(1/abs(\x-.3),1/abs(\x-1.3)) + 0.03*max(1/abs(\x-.7),1/abs(\x+.3))}) -- (1,0) -- cycle; \draw[color=red, thick] plot[domain=0:1, scale=1, samples=80] (\x,{0.03*max(1/abs(\x-.3),1/abs(\x-1.3)) + 0.03*max(1/abs(\x-.7),1/abs(\x+.3))}); \draw[dotted] (.3,1) -- (.3,.03); \draw (.3,.03) -- (.3,-.05) node[below]{$p_0$}; \draw[dotted] (.7,1) -- (.7,.03); \draw (.7,.03) -- (.7,-.05) node[below]{$q_0$}; \draw[color=green!60!black,->] (.4,-.025) --node[midway, below]{$\alpha$} (.85,-.025); \end{tikzpicture} \caption{The special flow: given a point $u\in D$, its orbit (in blue) moves vertically at unit speed in $D$, subject to the identification $(u,T(u))\sim (R_\alpha(u),0)$.} \end{figure}

Note that the image of $\Xi$ is $\mathbb{T}^2$ minus two line segments going from the points $\mathbf{p}$ and $\mathbf{q}$ to $\Sigma$. The map $\Xi$ induces a family of measurable maps $\Psi^t = \Xi^{-1} \phi^t \Xi$ on $D$. The family $\Psi^t$ is called a \emph{special flow} on $D$ with \emph{base} $R_\alpha$ and \emph{roof function} $T$ (see \cite{Special}). Note that flow lines are vertical in restriction to the fundamental domain $D_0$. We shall see in Section \ref{assymptotic} that, due to the quadratic order of the speed function $\varphi$, the roof function $T$ has two cusps of order $\| x-p_0\|^{-1}$ and $\| x-q_0 \|^{-1}$ in a neighbourhood of the two points $p_0$ and $q_0$ where it is undefined (Proposition~\ref{return times}). In particular, the roof function is not integrable.

\subsection{The asymptotic behaviour of return times} \label{assymptotic}
In this section we estimate the return times $T$ of points to the transverse section $\Sigma$ for the vector field $X$. Our goal is to prove that they only depend on the local behaviour of $\varphi$ around the singularities. We start by estimating a quantity $\kappa$ similar to $T$, for a local quadratic model and a horizontal flow.

\begin{lemma}\label{bdd diff lemma} Consider a horizontal vector field $X=(\varphi(x,y),0)$ in $\mathbb{R}^2$, where $\varphi$ is a positive definite quadratic form $\varphi(x,y) = a x^2 + 2 b x y + c y^2$ with determinant $d= ac-b^2$. Denote by $\phi^t$ the flow associated to $X$. Fix some $\delta>0$ and, for $y \neq 0$, let $\kappa(y)$ be defined by \[\phi^{\kappa(y)}(-\delta, y) = (\delta, y).\] Then \begin{equation}\label{bounded difference} \kappa(y) = \frac{\pi}{\sqrt{d} |y|} + \gamma(y) \end{equation} for some bounded function $\gamma$. \end{lemma}

\begin{proof} The unboundedness of $\kappa$ occurs only in a neighbourhood of $0$, so that one can reduce its study to that on a bounded set of $\mathbb{R}$.
Using the method of separation of variables, we see that \[\kappa(y) = \int_{-\delta}^{\delta} \frac{dx}{\varphi(x,y)}.\] Note that $\kappa(y)$ differs from \[\kappa_0(y) = \int_{(-\delta-by)/a}^{(\delta-by)/a} \frac{dx}{\varphi(x,y)} \] by a bounded function, so it suffices to prove (\ref{bounded difference}) with $\kappa_0$ in place of $\kappa$. Moreover, by symmetry, it suffices to consider the case where $y>0$. A direct calculation gives \begin{align*} \kappa_0(y) & = \int_{(-\delta-by)/a}^{(\delta-by)/a} \frac{dx}{a x^2 + 2bxy+cy^2} = \left[ \frac{1}{y \sqrt{d}} \arctan \left(\frac{ax+by}{y\sqrt{d}} \right) \right]_{(-\delta-by)/a}^{(\delta-by)/a} \\ & = \frac{2}{y \sqrt{d}} \arctan\left(\frac{\delta}{y \sqrt{d}}\right). \end{align*} Recall that $\arctan(x) + \arctan(\frac{1}{x}) = \frac{\pi}{2}$ for every $x>0$. Therefore \[\kappa_0(y) = \frac{\pi}{y \sqrt{d}} - \frac{2}{y \sqrt{d}} \arctan \left( \frac{y \sqrt{d}}{\delta} \right) ,\] and the proof follows readily since the last term is bounded in $y$. \end{proof}

\begin{lemma}\label{intble diff lemma} Consider a horizontal vector field $X=(\varphi(x,y),0)$ in $Q=(-1,1)^2$, where $\varphi:Q \to \mathbb{R}$ is a non-negative $C^3$ function vanishing at $(0,0)$ and strictly positive elsewhere. Suppose that the Hessian $D^2 \varphi(0,0)$ is positive definite and write $d = \det(D^2 \varphi(0,0))$. Denote by $\phi^t$ the flow associated to $X$. Fix some $0< \delta< 1$ and, for $y \neq 0$, let $\kappa(y)$ be defined by \[\phi^{\kappa(y)}(-\delta, y) = (\delta, y).\] Then \begin{equation} \label{intble difference} \kappa(y) = \frac{\pi}{|y| \sqrt{d}} + \sigma(y) \end{equation} for some integrable function $\sigma$. \end{lemma}

\begin{proof} Let $A = D^2 \varphi (0,0)$ and let $0< \lambda_1 \leq \lambda_2$ be its eigenvalues. To simplify notation we write $(x,y)$ as $\mathbf{x}$. Recall that \[ \lambda_1 \| \mathbf{x} \|^2 \leq \mathbf{x}^T A \mathbf{x} \leq \lambda_2 \| \mathbf{x} \|^2 \] for every $\mathbf{x} \in \mathbb{R}^2$. Since $\varphi$ is $C^3$ we can write $\varphi(\mathbf{x}) = \mathbf{x}^T A \mathbf{x} + R(\mathbf{x})$, where \begin{equation}\label{taylor with rest} |R(\mathbf{x})| \leq K \| \mathbf{x} \|^3 \end{equation} for some $K>0$ in a neighbourhood of $\mathbf{0}$. Just like in the proof of Lemma~\ref{bdd diff lemma}, we have \[ \kappa(y) = \int_{-\delta}^{\delta} \frac{dx}{\varphi(\mathbf{x})} = \int_{-\delta}^{\delta} \frac{dx}{\mathbf{x}^T A \mathbf{x} + R(\mathbf{x})}.\] We know from Lemma~\ref{bdd diff lemma} that \[\int_{-\delta}^{\delta} \frac{dx}{\mathbf{x}^T A \mathbf{x}} \] differs from $\pi/(|y| \sqrt{d})$ by a bounded function. Therefore, in order to prove Lemma~\ref{intble diff lemma}, it suffices to show that \[\sigma(y) = \int_{-\delta}^{\delta} \left( \frac{1}{\mathbf{x}^T A \mathbf{x}}-\frac{1}{\mathbf{x}^T A \mathbf{x} + R(\mathbf{x})} \right) dx = \int_{-\delta}^\delta \frac{R(\mathbf{x}) \, dx}{\mathbf{x}^T A \mathbf{x} (\mathbf{x}^T A \mathbf{x} + R(\mathbf{x}))}\] is integrable. Note that $\sigma$ is continuous away from $y=0$, so it suffices to show that $\int_{-\delta}^{\delta} |\sigma(y)| dy < \infty$. Note also that changing the value of $\delta$ changes $\sigma$ by a bounded amount. Thus, upon possibly reducing $\delta$, we can (and do) suppose that $K \| \mathbf{x} \|^3 \leq \frac{\lambda_1}{2} \| \mathbf{x} \|^2$ in $(-\delta, \delta)^2$.
Consequently, for every $y \in (-\delta, \delta) \setminus \{0\}$, we have \begin{align*} |\sigma(y)| & \leq \int_{-\delta}^{\delta} \frac{|R(\mathbf{x})| dx}{\mathbf{x}^T A \mathbf{x} (\mathbf{x}^T A \mathbf{x} - |R(\mathbf{x})|)} \\ & \leq \int_{-\delta}^{\delta} \frac{K \| \mathbf{x} \|^3 dx}{\lambda_1 \| \mathbf{x} \|^2 (\lambda_1 \|\mathbf{x} \|^2 - K \| \mathbf{x} \|^3)} \\ & \leq \frac{2K}{\lambda_1^2} \int_{-\delta}^{\delta} \frac{dx}{\| \mathbf{x} \|}\\ & = \frac{2K}{\lambda_1^2} \int_{-\delta}^{\delta} \frac{dx}{\sqrt{x^2+y^2}} \\ & = \frac{4K}{\lambda_1^2} \left( \log \big( \delta+ \sqrt{\delta^2+y^2}\big)-\log |y| \right). \end{align*} In particular $\int_{-\delta}^\delta |\sigma(y)| \ dy < \infty$. \end{proof} Lemma \ref{intble diff lemma} tells us roughly how much a horizontal flow is slowed down near a stopping point at the origin. In order to apply it to the stopping points $\mathbf{p}, \mathbf{q}$ of an irrational flow on the torus, we need to perform a change of coordinates. As we shall see, this change of coordinates is equivalent to changing the Hessian of $\varphi$ in a way that does not affect its determinant. Let us go through the details. Suppose that we have fixed the angle $\alpha$. For small $\delta$, let $Q_\delta = (-\delta, \delta)^2 \subset \mathbb{R}^2$ and consider the affine charts $\xi_{\mathbf{p}}, \xi_{\mathbf{q}} : Q_\delta \to \mathbb{T}^2$ given by $\xi_{\mathbf{p}}(\mathbf{x}) = I(P\mathbf{x}) + \mathbf{p}$ and $\xi_{\mathbf{q}}(\mathbf{x}) = I(P \mathbf{x}) + \mathbf{q}$ where \[ P= \left( \begin{matrix} 1 & 0 \\ \alpha & 1 \end{matrix} \right),\] and $I: \mathbb{R}^2 \to \mathbb{T}^2$ is the canonical projection. Let \[\operatorname{Box}_\delta(\mathbf{p}) = \xi_\mathbf{p} (Q_\delta)\qquad \text{and}\qquad \operatorname{Box}_\delta(\mathbf{q}) = \xi_\mathbf{q} (Q_\delta).\] We will refer to $\operatorname{Box}_\delta(\mathbf{p})$ and $\operatorname{Box}_\delta(\mathbf{q})$ as \emph{flow boxes} around $\mathbf{p}$ and $\mathbf{q}$. For any $x\in\mathbb{T} = \mathbb{R}/\mathbb{Z}$, we let $\tilde x$ be a lift of $x$ to $\mathbb{R}$ and define \[ \|x\| = \min_{n\in\mathbb{Z}} |\tilde x-n|.\] Let also \[S_\mathbf{p} : (-\delta, \delta) \to (0,\infty]\] be the time it takes for the flow to cross $\operatorname{Box}_\delta(\mathbf{p})$, defined by \[S_\mathbf{p}(y) = \min \{ t>0: \phi^t(\mathbf{p} - \delta (1, \alpha)+(0,y)) \notin \operatorname{Box}_\delta(\mathbf{p})\}.\] \begin{lemma} \label{time in a box} Let $X = \varphi X_0$ be a reparameterized linear flow satisfying (SH) with a stopping point at $\mathbf{p}$ and $0 < \delta < 1$ small enough so that $X$ has no other stopping point in $\operatorname{Box}_\delta(\mathbf{p})$. Then $y\mapsto S_\mathbf{p}(y) - \frac{\pi}{\sqrt{d_\mathbf{p}} | y |}$ is integrable. \end{lemma} \begin{proof} Let $Q= (-\delta, \delta)^2$ and $\xi_\mathbf{p} : Q \to \operatorname{Box}_\delta(\mathbf{p})$ be the affine chart described in the definition of $\operatorname{Box}_\delta(\mathbf{p})$. Let $\tilde{X} = \xi_{\mathbf{p}}^* X$ be the pull-back of $X$ through $\xi_\mathbf{p}$ and denote by $\tilde{\phi}^t$ its associated flow on $Q$. Note that $\tilde{X}$ is a horizontal vector field and that the time it takes for a trajectory of $\tilde{\phi}^t$ to cross $Q$ is the same as the time it takes for a trajectory of $\phi^t$ to cross $\operatorname{Box}_\delta(\mathbf{p})$.
We claim that $\tilde{X}$ is of the form $\tilde{\varphi}_{\mathbf{p}} e_1$, where $e_1$ is the unit vector $(1,0)$ and $\tilde{\varphi}_{\mathbf{p}}$ satisfies \begin{equation}\label{equal Hessians} \det D^2 \tilde{\varphi}_{\mathbf{p}} ( \mathbf{0}) = \det D^2 \varphi(\mathbf{p}) = d_\mathbf{p}. \end{equation} Once this is shown, the proof follows from Lemma \ref{intble diff lemma}. To see why (\ref{equal Hessians}) holds, note that \[\tilde{X}(\mathbf{x}) = (\xi_{\mathbf{p}}^* X) (\mathbf{x}) = D\xi_\mathbf{p}^{-1}(\xi_{\mathbf{p}}(\mathbf{x})) X(\xi_\mathbf{p}(\mathbf{x})) = P^{-1} X (\xi_\mathbf{p}(\mathbf{x})) = \tilde{\varphi}_{\mathbf{p}} (\mathbf{x}) e_1,\] where $\tilde{\varphi}_{\mathbf{p}} = \varphi \circ \xi_{\mathbf{p}}$, and therefore \[D^2 \tilde{\varphi}_{\mathbf{p}}(\mathbf{0}) = D\xi_{\mathbf{p}}(\mathbf{0}) ^T D^2 \varphi (\mathbf{p}) D \xi_{\mathbf{p}}(\mathbf{0}) = P^T D^2 \varphi(\mathbf{p}) P. \] Since $\det P = 1$ we have $\det D^2 \tilde{\varphi}_{\mathbf{p}}(\mathbf{0}) = \det D^2 \varphi ( \mathbf{p}) =d_\mathbf{p}$. \end{proof} We are now able to provide a rather nice description of the return time of the flow of $X$ to a transverse cross section. \begin{proposition}\label{return times} Let $X = \varphi X_0$ be a reparameterized linear flow satisfying (SH) with stopping points at $\mathbf{p}$ and $\mathbf{q}$, and $\Sigma = \{x_0\} \times \mathbb{T}$ a cross section containing neither $\mathbf{p}$ nor $\mathbf{q}$. Let \begin{align*} T: \mathbb{T} & \longrightarrow (0,\infty] \\ y & \longmapsto \min\{ t>0: \phi^t(x_0,y) \in \Sigma \} \end{align*} be the return time of the flow to $\Sigma$. Denote by $d_\mathbf{p}$ and $d_\mathbf{q}$ the determinants of the Hessian of $\varphi$ at $\mathbf{p}$ and $\mathbf{q}$ respectively. Then \[T(y) = \frac{\pi }{\sqrt{d_\mathbf{p}} \|y-p_0\|} + \frac{\pi }{\sqrt{d_\mathbf{q}} \| y-q_0\|} + \sigma(y) \] for some integrable function $\sigma: \mathbb{T} \to \mathbb{R}$. \end{proposition} Recall that $p_0$ is the unique point in $\mathbb{T}$ such that $\mathbf{p} = (x_0, p_0) + r (1, \alpha)$ for some $0< r< 1$. Similarly for $q_0$. Note that Proposition \ref{return times} does not require $p_0$, $q_0$ to be distinct. \begin{proof} Fix $\delta>0$ so that the sets $\Sigma$, $\operatorname{Box}_\delta(\mathbf{p})$ and $\operatorname{Box}_\delta(\mathbf{q})$ are pairwise disjoint. Apply Lemma \ref{time in a box} to each of $\mathbf{p}$ and $\mathbf{q}$, and observe that the time spent by the orbit of $(x_0,y)$ outside $\operatorname{Box}_\delta(\mathbf{p}) \cup \operatorname{Box}_\delta(\mathbf{q})$ before it hits $\Sigma$ is a bounded function of $y$. \end{proof} Proposition \ref{return times} shows that the behaviour of time averages for the flow $\phi^t$ can be thought of as a problem of infinite ergodic theory. As we shall see in the next section, the behaviour of the time averages of the flow is determined by the behaviour of the quotient \[ \sum_{k=0}^{n-1} \frac{1}{\|x+k\alpha - p_0 \|} \Big/ \sum_{k=0}^{n-1} \frac{1}{\|x+k\alpha - q_0 \|}, \] for typical $x$, as $n \to \infty$. \section{Invariant measures}\label{SecInv} \subsection{A $\sigma$-finite invariant measure} \begin{figure} \noindent \includegraphics[width=.33\linewidth]{PointFlowAccuBig_100000.png}\hfill \includegraphics[width=.33\linewidth]{PointFlowAccuBig_1000000.png}\hfill \includegraphics[width=.33\linewidth]{PointFlowAccuBig_10000000.png} \caption{Simulations of the flow $\phi^t$ up to times $T=10^5$ (left), $10^6$ (middle) and $10^7$ (right).
More precisely, these are simulations of a single orbit (blue dots) starting at the point $M=(0.1,\,0.3)$ of the map $M\mapsto M+\delta\varphi(M)(1,\alpha)$, with $\delta=0.1972348$ and $\alpha=0.764831$, and $\varphi(M) = \min(\|M-\mathbf{p}\|_2, \|M-\mathbf{q}\|_2)$ for $\mathbf{p}=(0.25,0.75)$ and $\mathbf{q}=(0.75,0.25)$. Note that some strips can be observed on these simulations (they are more visible for $T=10^5$); they correspond to close returns of the rotation of angle $\alpha$ to the initial condition.}\label{simul} \end{figure} Let $\phi^t$ be a reparameterized linear flow satisfying (SH). Consider the special flow $\Psi^t$ on the domain $D$ as described in Section \ref{special flows}. Let $m$ denote the restriction of the Lebesgue measure on $\mathbb{R}^2$ to $D$. It is straightforward to check that $m$ is $\Psi^t$-invariant for every $t$. It follows that $\mu = \Xi_* m$ is invariant under $\phi^t$ and absolutely continuous with respect to the Haar measure $\la^2$ on $\mathbb{T}^2$. However --- and here's the catch --- due to the non-integrability of the roof function (Proposition~\ref{return times}), the measure $\mu$ is not a finite measure (although it is clearly $\sigma$-finite). \begin{remark} Instead of looking at $\mathbb{T}^2$ we could consider reparameterizations of a minimal linear flow on $\mathbb{T}^n$ for $n \geq 3$ with two stopping points. In this case, we would obtain a special flow over $\mathbb{T}^{n-1}$ whose roof function has, again, two asymptotics of order $\|x\|^{-1}$. However, for $n-1 \geq 2$, such a function is integrable. It therefore follows that there is an invariant probability $\mu$ absolutely continuous with respect to the Haar measure on $\mathbb{T}^n$. In particular, there cannot be an extreme historic behaviour in this setting, unless, of course, the order of the zeros of the speed function at the stopping points is higher than quadratic. \end{remark} \subsection{Limit measures} Let $\mathcal{M}_X$ denote the set of invariant probability measures for $\phi^t$. The following proposition is a special case of a more general result by Saghin-Sun-Vargas \cite[Proposition 1]{MR2670926}. \begin{proposition}\label{invariant probs} $\mathcal{M}_X = \{ \tau \delta_{\mathbf{p}} + (1-\tau) \delta_{\mathbf{q}}: 0 \leq \tau \leq 1 \}$. \end{proposition} We remark that $\phi^t$ is an example of a flow for which every point is non-wandering (indeed, every point is in the closure of the set of recurrent points) but the union of the supports of the invariant measures is finite. \medskip Given measures $\mu, \nu \in \mathcal{M}_X$, we use the notation $[\mu, \nu]$ to denote the set $\{ t \mu + (1-t) \nu: \ 0 \leq t \leq 1 \}$. Thus Proposition~\ref{invariant probs} can be written as $\mathcal{M}_X = [\delta_\mathbf{p}, \delta_\mathbf{q}]$. The following proposition, in turn, says that the set of limit measures is almost everywhere constant. \begin{proposition}\label{invariant probs better} For any reparameterized linear flow (whose set of singularities has zero Lebesgue measure), the set $p\omega(\mathbf{x})$ is almost everywhere constant. \end{proposition} \begin{proof} Denote by $\mathcal{K}(\mathcal{M}_X)$ the set of compact subsets of $\mathcal{M}_X$, endowed with a distance generating the Hausdorff topology. For every $n>0$, the set $\mathcal{K}(\mathcal{M}_X)$ is covered by a finite number of balls $B(K_i^n,1/n)$.
For any $n,i$, the set \[\big\{\mathbf{x}\in \mathbb{T}^2 : p\omega(\mathbf{x})\in B(K_i^n,1/n)\big\}\] is $\phi^t$-invariant (and the union of these sets over $i$ is of full measure). Hence, by ergodicity, at least one of these sets has measure 1: for any $n$, there exists $i$ such that $p\omega(\mathbf{x})\in B(K_i^n,1/n)$ for a.e. $\mathbf{x}\in \mathbb{T}^2$. This implies that $p\omega(\mathbf{x})$ is a.e. constant. \end{proof} \subsection{Computing limit measures in Diophantine terms}\label{SecLimitDioph} In this section we link the ergodic behaviour of the flow $\phi^t$ with some limit behaviour of Birkhoff sums over the rotation $R_\alpha$. More precisely, we show how the presence of an extreme historic behaviour can be reduced to a Diophantine problem of comparing sums of reciprocals. First we develop a general criterion for the existence of an extreme historic behaviour (Proposition~\ref{criterium1}). For $x\in\mathbb{T}$, let \begin{equation}\label{DefSn} S_k(x) = \sum_{i=0}^{k-1} \frac{1}{\|x + i \alpha \|} \end{equation} and \begin{equation}\label{DefTheta} \Theta_k^\beta(x) = \frac{S_k(x)}{S_k(x-\beta)}. \end{equation} Note that for any $\mathbf{x}\in\mathbb{T}^2$, by continuity of $t\mapsto \mu_\mathbf{x}^t$, the limit set $p\omega(\mathbf{x})$ is connected. Combining Propositions \ref{invariant probs} and \ref{invariant probs better}, we see that there exist $0\le \tau_0 \le \tau_1 \le 1$ such that \begin{equation}\label{crit1} p\omega(\mathbf{x}) = \Big[\tau_0\delta_\mathbf{p} + (1-\tau_0)\delta_\mathbf{q},\ \tau_1\delta_\mathbf{p} + (1-\tau_1)\delta_\mathbf{q}\Big] \quad \la^2-a.e. \end{equation} \begin{proposition} \label{criterium1} Let $\phi^t$ be as in (SH) and $\Sigma$ chosen so that $p_0 \neq q_0$. Let $\beta = q_0-p_0$ and let $0\le \tau_0 \le \tau_1 \le 1$ be such that \eqref{crit1} holds. Suppose that the positive orbit of $\mathbf{x}$ meets neither $\mathbf{p}$ nor $\mathbf{q}$. Then \begin{align*} \limsup_{n \to \infty} \Theta_n^\beta(x) & = \sqrt{\frac{d_\mathbf{p}}{d_\mathbf{q}}}\left(\frac{\tau_1}{1-\tau_1} \right) \quad \boldsymbol{\lambda}-a.e.,\text{ and}\\ \liminf_{n \to \infty} \Theta_n^\beta(x) & = \sqrt{\frac{d_\mathbf{p}}{d_\mathbf{q}}}\left(\frac{\tau_0}{1-\tau_0} \right) \quad \boldsymbol{\lambda}-a.e. \end{align*} \end{proposition} We begin by establishing an auxiliary property regarding accumulation points of $\mu_\mathbf{x}^t$. This will shorten the (somewhat lengthy but straightforward) proof of Proposition~\ref{criterium1}. \begin{lemma}\label{equivalent accumulation} Let $\phi^t$ be a reparameterized linear flow satisfying (SH) and $\mu_\mathbf{x}^t$ its associated family of empirical measures. Let $r>0$ be small enough so that $\operatorname{Box}_r (\mathbf{p})$ and $\operatorname{Box}_r(\mathbf{q})$ are disjoint. Then, given any $\mathbf{x} \in \mathbb{T}^2$, the following are equivalent: \begin{enumerate} \item \[\tau \delta_\mathbf{p} + (1-\tau)\delta_\mathbf{q} \in p\omega(\mathbf{x}); \] \item \[ \liminf_{t \to \infty} \mu_\mathbf{x}^t(\operatorname{Box}_r(\mathbf{p})) \leq \tau \leq \limsup_{t \to \infty} \mu_\mathbf{x}^t(\operatorname{Box}_r(\mathbf{p})); \] \item \[ \liminf_{t \to \infty} \mu_\mathbf{x}^t(\operatorname{Box}_r(\mathbf{q})) \leq 1-\tau \leq \limsup_{t \to \infty} \mu_\mathbf{x}^t(\operatorname{Box}_r(\mathbf{q})).
\] \end{enumerate} \end{lemma} \begin{proof} We start by recalling a useful characterization of weak* convergence: a sequence of measures $\mu_n$ converges weakly* to $\mu$ if and only if $\mu_n(U) \to \mu(U)$ for every open set $U$ satisfying $\mu(\partial U) = 0$. Suppose that (1) holds. Then we may choose a sequence $t_n \to \infty$ such that \[\mu_\mathbf{x}^{t_n} \to \tau \delta_\mathbf{p} + (1-\tau) \delta_\mathbf{q}.\] Since $r$ is small, the boundaries of both flow boxes $\operatorname{Box}_r(\mathbf{p})$ and $\operatorname{Box}_r(\mathbf{q})$ have zero $\tau \delta_\mathbf{p} + (1-\tau) \delta_\mathbf{q}$ measure. Hence \[\lim_{n \to \infty}\mu_\mathbf{x}^{t_n}(\operatorname{Box}_r(\mathbf{p})) = \tau \qquad \text{and} \qquad \lim_{n \to \infty} \mu_\mathbf{x}^{t_n}(\operatorname{Box}_r(\mathbf{q})) = 1-\tau.\] It follows that \begin{equation*}\label{liminfp} \liminf_{t \to \infty} \mu_\mathbf{x}^t(\operatorname{Box}_r(\mathbf{p})) \leq \tau \qquad \text{and} \qquad \liminf_{t \to \infty} \mu_\mathbf{x}^t (\operatorname{Box}_r(\mathbf{q})) \leq 1-\tau. \end{equation*} and similarly \[\limsup_{t \to \infty} \mu_\mathbf{x}^t (\operatorname{Box}_r(\mathbf{p})) \geq \tau \qquad \text{and} \qquad \limsup_{t \to \infty} \mu_\mathbf{x}^t (\operatorname{Box}_r(\mathbf{q})) \geq 1-\tau.\] We have shown that (1) implies both (2) and (3). We shall now show that (2) implies (1). The proof that (3) implies (1) is analogous. Suppose that (2) holds. By continuity of the map \[t \mapsto \mu_\mathbf{x}^t (\operatorname{Box}_r(\mathbf{p}))\] it is possible to find a sequence $t_n \to \infty$ such that \[\lim_{n \to \infty} \mu_\mathbf{x}^{t_n}(\operatorname{Box}_r(\mathbf{p})) = \tau.\] The boundary of $\operatorname{Box}_r(\mathbf{p})$ has zero $\mu$-measure for every $\mu \in p\omega(\mathbf{x})$. Hence any accumulation point $\mu$ of $\mu_\mathbf{x}^{t_n}$ must satisfy \begin{equation} \label{accpoint} \mu(\operatorname{Box}_r(\mathbf{p})) = \tau. \end{equation} By Proposition \ref{invariant probs}, only one measure in $\mathcal{M}_X$ satisfies (\ref{accpoint}), namely $\mu = \tau \delta_\mathbf{p}+(1-\tau) \delta_\mathbf{q}$. Therefore $\mu_\mathbf{x}^{t_n} \to \mu$, so $\mu \in p\omega(\mathbf{x})$. \end{proof} \begin{proof}[Proof of Proposition~\ref{criterium1}] Let $r>0$ be small enough so that $\operatorname{Box}_r(\mathbf{p}) \cap \operatorname{Box}_r(\mathbf{q}) = \emptyset$. Upon possibly reducing $r$, we may suppose that the images of $\operatorname{Box}_r(\mathbf{p})$ and $\operatorname{Box}_r(\mathbf{q})$ by the return map on $\Sigma$ are disjoint (because $p_0\neq q_0$). Using Lemma~\ref{equivalent accumulation}, we know that there is a full $\boldsymbol{\lambda}$-measure set $Z \subset \mathbb{T}$ such that, given any $y \in Z$, we have \begin{equation}\label{limsup} \limsup_{t \to \infty} \mu_{(x_0,y)}^t(\operatorname{Box}_r(\mathbf{q})) = 1-\tau_0 \quad \text{and} \quad \liminf_{t \to \infty} \mu_{(x_0,y)}^t(\operatorname{Box}_r(\mathbf{q})) = 1-\tau_1. \end{equation} Note that this property is invariant under the flow, and therefore must hold on a set of full $\la^2$-measure in $\mathbb{T}^2$. Consider the following functions from $\mathbb{T}$ to $\mathbb{R}_+ \cup \{\infty \}$ (see \eqref{EqDefTau}).
\begin{align*} T(y) & = \min\{t>0: \phi^t((x_0,y)) \in \Sigma \} \\ S_\mathbf{p} (y) & = \boldsymbol{\lambda} (\{ t \in [0, T(y)): \phi^t((x_0,y)) \in \operatorname{Box}_r(\mathbf{p}) \})\\ S_\mathbf{q} (y) & = \boldsymbol{\lambda} (\{ t \in [0, T(y)): \phi^t((x_0,y)) \in \operatorname{Box}_r(\mathbf{q}) \}) \\ O(y) & = \boldsymbol{\lambda} (\{t \in [0,T(y)): \phi^t((x_0,y)) \notin \operatorname{Box}_r(\mathbf{p}) \cup \operatorname{Box}_r(\mathbf{q}) \}) \\ A(y) & = \frac{\pi}{\sqrt{d_\mathbf{p}} \|y -p_0\| } \\ B(y) & = \frac{\pi}{\sqrt{d_\mathbf{q}} \|y -q_0 \| }. \end{align*} (We set the value of these functions to $\infty$ whenever their defining expressions are not well defined.) Note that, since $\operatorname{Box}_r(\mathbf{p})$ and $\operatorname{Box}_r(\mathbf{q})$ are disjoint, we have \[S_\mathbf{p} + S_\mathbf{q} + O = T.\] \medskip We know from Lemma~\ref{time in a box} that there are functions $\sigma_\mathbf{p}, \sigma_\mathbf{q} \in L^1(\mathbb{T})$ such that \[S_\mathbf{p} = A+ \sigma_\mathbf{p} \qquad \text{and} \qquad S_\mathbf{q} = B + \sigma_\mathbf{q}.\] Writing \[C = O + \sigma_\mathbf{p} + \sigma_\mathbf{q} \] and using the notation \begin{align*} A_n = \sum_{k=0}^{n-1} A\circ R_\alpha^k, \quad B_n = \sum_{k=0}^{n-1} B\circ R_\alpha^k, \quad C_n = \sum_{k=0}^{n-1} C\circ R_\alpha^k, \quad T_n = \sum_{k=0}^{n-1} T\circ R_\alpha^k, \end{align*} we get \[ A_n + B_n + C_n = T_n. \] We remark that, since the images of $\operatorname{Box}_r(\mathbf{p})$ and $\operatorname{Box}_r(\mathbf{q})$ under the return map on $\Sigma$ are disjoint, property \eqref{limsup} can be replaced by \begin{equation}\label{limsup2} \limsup_{n \to \infty} \mu_{(x_0,y)}^{T_n(y)}(\operatorname{Box}_r(\mathbf{q})) = 1-\tau_0 \quad \text{and} \quad \liminf_{n \to \infty} \mu_{(x_0,y)}^{T_n(y)}(\operatorname{Box}_r(\mathbf{q})) = 1-\tau_1 \end{equation} for $\boldsymbol{\lambda}$-almost every $y\in \mathbb{T}$. \medskip Note that \begin{align*} \mu_{(x_0,y)}^{T_n(y)}\big(\operatorname{Box}_r(\mathbf{q})\big) & = \frac{\sum_{k=0}^{n-1}S_\mathbf{q}(R_\alpha^k(y))}{T_n(y)}\\ & = \frac{\sum_{k=0}^{n-1}\sigma_\mathbf{q}(R_\alpha^k(y))}{T_n(y)} + \frac{B_n(y)}{T_n(y)}\\ & = \frac{\sum_{k=0}^{n-1}\sigma_\mathbf{q}(R_\alpha^k(y))}{T_n(y)} + \frac{1}{1+\frac{A_n(y)}{B_n(y)}+\frac{C_n(y)}{B_n(y)}}. \end{align*} Recall that $\sigma_\mathbf{q}$ and $C$ are integrable functions whereas $B$ and $T$ are not. Hence \[\frac{\sum_{k=0}^{n-1}\sigma_\mathbf{q}(R_\alpha^k(y))}{T_n(y)} \to 0,\qquad \frac{C_n(y)}{B_n(y)} \to 0 \qquad \boldsymbol{\lambda}-a.e.\] (because, by ergodicity, $C_n(y)/n \to \int C$ almost everywhere, while $B_n(y)/n$ tends to $+\infty$ almost everywhere). Consequently, since by \eqref{limsup2} \[\limsup_{n \to \infty} \mu_{(x_0,y)}^{T_n(y)} \big(\operatorname{Box}_r(\mathbf{q})\big) = 1-\tau_0 \quad \boldsymbol{\lambda}-a.e.,\] one has \[\liminf_{n\to \infty} \Theta_n^\beta (y-p_0) = \liminf_{n \to \infty} \sqrt{\frac{d_\mathbf{p}}{d_\mathbf{q}}}\frac{A_n(y)}{B_n(y)} = \sqrt{\frac{d_\mathbf{p}}{d_\mathbf{q}}}\left(\frac{\tau_0}{1-\tau_0} \right) \quad \boldsymbol{\lambda}-a.e.\] A similar argument gives the expression for $\limsup_{n \to \infty} \Theta_n^\beta$. \end{proof} \subsection{Consequences of a symmetry property of $\Theta_n^\beta$}\label{SubsecSym} We shall see that the functions $\Theta_n^\beta$ have a nice symmetry property.
It will imply that there is only one possibility for physical measures (Proposition~\ref{PropPossibOmega}), and will give criteria for the existence of an extreme historic behaviour (Propositions \ref{criterium2} and \ref{criterium3}) that are more easily checkable than Proposition~\ref{criterium1}. Denote by $I : \mathbb{T} \to \mathbb{T}$ the involution map $x \mapsto -x$ and let \[J_n^\beta = R_{\beta - (n-1) \alpha} \circ I.\] Note that $J_n^\beta$ can also be written as $I \circ R_{(n-1)\alpha-\beta}$. \begin{lemma}\label{symmetry} We have \[\Theta_n^\beta \circ J_n^\beta = \frac{1}{\Theta_n^\beta}.\] \end{lemma} \begin{proof} Direct calculation: since $\|{-}z\| = \|z\|$ for every $z \in \mathbb{T}$, the change of index $j = n-1-k$ gives $S_n(J_n^\beta(x)) = S_n(x-\beta)$ and $S_n(J_n^\beta(x)-\beta) = S_n(x)$, and the claim follows from \eqref{DefTheta}. \end{proof} An important consequence of Lemma~\ref{symmetry} is that it leaves only one possible candidate for a physical measure. \begin{proposition}\label{PropPossibOmega} If $\phi^t$ has a physical measure, then it is equal to $\mu_\infty$ (defined in \eqref{EqFormPhys}). \end{proposition} Hence, either $\phi^t$ has an historic behaviour, or it admits this measure as a physical measure with full basin. \begin{proof} We already know that if $\phi^t$ has a physical measure, then it is unique (a consequence of ergodicity; see Proposition~\ref{invariant probs better}). Suppose then that \[\mu_\infty = \tau \delta_\mathbf{p} + (1-\tau) \delta_\mathbf{q}\] is a physical measure for $\phi^t$. Then, according to Proposition~\ref{criterium1} we must have \begin{equation}\label{physmeas} \lim_{n \to \infty} \Theta_n^\beta(x) = \sqrt{\frac{d_\mathbf{p}}{d_\mathbf{q}}}\left( \frac{\tau}{1-\tau} \right) \end{equation} for $\boldsymbol{\lambda}$-a.e. $x\in\mathbb{T}$. Let $L$ be the right hand side of (\ref{physmeas}). We claim that $L = 1$. Indeed, suppose that $L>1$. Then there exists some $N\in\mathbb{N}$ such that \begin{equation}\label{EqPossib} \boldsymbol{\lambda} \big\{x\in\mathbb{T} : \Theta_n^\beta(x)>1\big\} > \frac{1}{2} \end{equation} for every $n \geq N$. Hence, according to Lemma~\ref{symmetry}, we have \[\boldsymbol{\lambda} \big\{x\in\mathbb{T} : \Theta_n^\beta \circ J_n^\beta <1 \big\} > \frac{1}{2}\] for every $n \geq N$. But this is not possible since $J_n^\beta$ preserves $\boldsymbol{\lambda}$. Hence $L \leq 1$. A similar argument shows that $L \geq 1$. Solving for $\tau$ in the equation \[\sqrt{\frac{d_\mathbf{p}}{d_\mathbf{q}}}\left( \frac{\tau}{1-\tau} \right) = 1\] gives \[\tau = \frac{\sqrt{d_\mathbf{q}}}{\sqrt{d_\mathbf{p}}+\sqrt{d_\mathbf{q}}}.\] \end{proof} One could expect Lemma~\ref{symmetry} to imply the $\boldsymbol{\lambda}$-almost everywhere symmetry of the set $p\omega(\mathbf{x})$. This is not true in its full generality (see Theorem~\ref{PropDivSum}). The reason is that the sequence of return times to $\Sigma$ alone does not carry enough information about the behaviour of $\mu_\mathbf{x}^t$. Lemma~\ref{symmetry} also provides us with two simple criteria for the presence of an extreme historic behaviour, under some uniformity hypotheses. \begin{proposition}\label{criterium2} Let $\phi^t$ be as in (SH). Suppose that there exists $C>0$ such that, given any $K>1$, one can find $n\in\mathbb{N}$ such that \begin{equation*}\label{crit2} \boldsymbol{\lambda} \big\{x \in \mathbb{T}: \Theta_n^\beta(x)>K \big\} \geq C. \end{equation*} Then $\phi^t$ has an extreme historic behaviour. \end{proposition} \begin{proposition}\label{criterium3} Let $\phi^t$ be as in (SH).
Suppose that, given any $K>1$, there exists $C>0$ such that \begin{equation*}\label{crit3} \boldsymbol{\lambda}\big\{x \in \mathbb{T}: \Theta_n^\beta(x)>K \big\} \geq C \end{equation*} for infinitely many $n$. Then $\phi^t$ has an extreme historic behaviour. \end{proposition} \begin{proof}[Proof of Proposition~\ref{criterium2}] According to Proposition~\ref{criterium1} it suffices to show that \begin{equation} \label{limsupinfinfty} \limsup_{n \to \infty} \Theta_n^\beta(x) = \infty \qquad \text{and}\qquad \liminf_{n \to \infty} \Theta_n^\beta(x) = 0 \end{equation} for $\boldsymbol{\lambda}$-almost every $x$. It is straightforward to check that the set on which \eqref{limsupinfinfty} holds is $R_\alpha$-invariant. Thus, to show that $\phi^t$ has an extreme historic behaviour, it suffices to show that this set has positive $\boldsymbol{\lambda}$-measure. Let \[A_{K,n} = \big\{ x \in \mathbb{T} : \Theta_n^\beta(x) >K\big\},\quad A_K = \bigcup_{n \in \mathbb{N}} A_{K,n}, \quad A = \bigcap_{K>1} A_K\] and \[B_{K,n} = \big\{x \in \mathbb{T}: \Theta_n^\beta(x) < 1/K \big\},\quad B_K = \bigcup_{n \in \mathbb{N}} B_{K,n}, \quad B = \bigcap_{K>1} B_K.\] Note that $\limsup_{n\to \infty} \Theta_n^\beta(x) = \infty$ if and only if $x \in A$, and that $\liminf_{n \to \infty} \Theta_n^\beta(x) =0$ if and only if $x \in B$. The hypothesis in Proposition~\ref{criterium2} implies that $\boldsymbol{\lambda}(A_K) \geq C$ for every $K>1$. Also, it follows from Lemma~\ref{symmetry} that \begin{align*} \boldsymbol{\lambda}(B_{K,n}) & = \boldsymbol{\lambda} \big\{x \in \mathbb{T}: \Theta_n^\beta(x) < 1/K \big\} \\ & = \boldsymbol{\lambda} \big\{ x \in \mathbb{T} : \Theta_n^\beta \circ J_n^\beta (x) > K \big\} \\ & = \boldsymbol{\lambda}(A_{K,n}) \geq C. \end{align*} It therefore follows from our hypothesis that $\boldsymbol{\lambda}(B_{K}) \geq C$. Note that $A_K$ is a decreasing family in the sense that $A_{K'} \subset A_K$ whenever $K' \geq K$. It follows that $\boldsymbol{\lambda}(A) \geq C$. The proof that $\boldsymbol{\lambda}(B)\geq C$ is analogous. Finally, since $A$ and $B$ coincide, up to $\boldsymbol{\lambda}$-null sets, with the $R_\alpha$-invariant sets on which $\limsup_n \Theta_n^\beta = \infty$ and $\liminf_n \Theta_n^\beta = 0$ respectively, ergodicity of $R_\alpha$ upgrades these positive-measure bounds to full measure, so that \eqref{limsupinfinfty} indeed holds $\boldsymbol{\lambda}$-almost everywhere. \end{proof} \begin{proof}[Proof of Proposition~\ref{criterium3}] In view of Proposition~\ref{criterium1} it suffices to show that \begin{equation}\label{limsupinfty} \limsup_{n \to \infty} \Theta_n^\beta(x) = \infty \end{equation} and \begin{equation}\label{liminfzero} \liminf_{n \to \infty} \Theta_n^\beta(x) = 0 \end{equation} hold for $\boldsymbol{\lambda}$-almost every $x$. The hypothesis implies that, given any $K>1$, there exists $C>0$ such that \[ \boldsymbol{\lambda} \left( \bigcup_{k \geq n} \big\{x : \Theta_k^\beta(x) > K \big\} \right) \geq C\] for every $n \geq 1$. Note that the sequence $\bigcup_{k \geq n} \{x : \Theta_k^\beta(x) > K \}$ is decreasing in $n$ so that \[\boldsymbol{\lambda} \left( \bigcap_{n \geq 1} \bigcup_{k\geq n} \big\{x: \Theta_k^\beta(x)>K\big\}\right) \geq C. \] In particular, the set \[A_K=\bigcup_{\epsilon>0} \bigcap_{n \geq 1} \bigcup_{k\geq n} \big\{x: \Theta_k^\beta (x) > K + \epsilon \big\} \] has positive $\boldsymbol{\lambda}$-measure for every $K>1$ (the term $\epsilon$ is added to ensure the invariance of the set $A_K$). Analogously, using Lemma~\ref{symmetry}, one sees that \[ B_K = \bigcup_{\epsilon>0} \bigcap_{n \geq 1} \bigcup_{k\geq n} \left\{x: \Theta_k^\beta (x) < \frac{1}{K+\epsilon} \right\} \] has positive $\boldsymbol{\lambda}$-measure for every $K>1$. The sets $A_K$ and $B_K$ are $R_\alpha$-invariant.
Indeed, $A_K$ is the set of points on which $\limsup_n \Theta_n^\beta $ is larger than $K$ and $B_K$ is the set on which $\liminf_n \Theta_n^\beta $ is smaller than $1/K$, and it is straightforward to check that $\Theta_n^\beta(x) \sim_{n} \Theta_n^\beta(x+\alpha)$ for every $x$. By ergodicity of $R_\alpha$ we conclude that $A_K$ and $B_K$ have full $\boldsymbol{\lambda}$-measure. Note that $A_K$ and $B_K$ are decreasing families. Thus \[A=\bigcap_{K>1} A_K\qquad \text{and}\qquad B=\bigcap_{K>1} B_K\] are also of full $\boldsymbol{\lambda}$-measure. But $A$ and $B$ are the sets on which (\ref{limsupinfty}) and (\ref{liminfzero}) hold, respectively. The proof is therefore complete. \end{proof} \section{Circle rotations} \label{PartRotations} \subsection{Diophantine approximation theory} Recall that for any $x\in\mathbb{T} = \mathbb{R}/\mathbb{Z}$ we define its norm as \[ \|x\| = \min_{m\in\mathbb{Z}} |\tilde x-m|,\] where $\tilde x$ is a lift of $x$ to $\mathbb{R}$. Let $\alpha>0$ be an irrational number, and let $R_\alpha: \mathbb{T} \to \mathbb{T}$ be its associated circle rotation. We write $\alpha = [a_0; a_1, a_2, \ldots]$ for its continued fraction expansion, denote by $p_n/q_n = [a_0; a_1, a_2, \ldots, a_n]$ its convergents, and let $\alpha_n$ be defined by \[\alpha = [a_0; a_1, \ldots, a_{n-1},\alpha_n].\] The sequence $q_n$ is characterized by the properties \begin{itemize} \item $q_0 = 1$, and \item $q_n = \min \{ k>q_{n-1}: \| k \alpha \| < \|q_{n-1} \alpha \| \} $ for every $n \geq 1$. \end{itemize} We also set $\rho_n = q_n \alpha - p_n$ and $\lambda^{(n)} = |\rho_n|$. For $k \in \mathbb{N}$, let $\mathcal{O}(k)$ denote the orbit $\{R_\alpha^i(0): 0 \leq i \leq k-1 \}$, and \begin{align*} m(\mathcal{O}(k)) & = \min_{x \in \mathcal{O}(k)} \min_{y \in \mathcal{O}(k) \setminus{x}} \|x-y\|, \text{ and}\\ M(\mathcal{O}(k)) & = \max_{x \in \mathcal{O}(k)} \min_{y \in \mathcal{O}(k) \setminus{x}} \| x-y \| \end{align*} be the smallest, resp.\ largest, distance between two consecutive points of the orbit $\mathcal{O}(k)$ on $\mathbb{T}$ (``gaps''). The following lemma recalls classical facts of Diophantine approximation theory which will be used in the sequel (some of them can be deduced from renormalization properties, see Figure \ref{FigRenor0}). \begin{lemma}\label{properties} \begin{equation}\label{EqContFrac0} \lambda^{(n)} = (-1)^n \rho_n = \min_{0< j< q_{n+1}}\|j\alpha\| ; \end{equation} \begin{equation}\label{EqContFrac1} \frac{\lambda^{(n-2)}}{\lambda^{(n-1)}} = \alpha_n \quad \text{and} \quad a_n = \lfloor \alpha_n \rfloor, \end{equation} \begin{equation}\label{EqTotTime} q_{n+1} = q_{n}a_{n+1} + q_{n-1}; \end{equation} \begin{equation}\label{Eqaeta} \lambda^{(n-1)}= a_{n+1} \lambda^{(n)} + \lambda^{(n+1)}; \end{equation} \begin{equation}\label{EqLambdaQ} q_{n+1} \ge q_n \quad \text{and} \quad\frac{1}{2q_{n+1}} \le \frac{1}{q_{n+1}+q_n} < \lambda^{(n)} < \frac{1}{q_{n+1}}. \end{equation} \begin{align*} m(\mathcal{O}(q_n)) & = \lambda^{(n-1)} > \frac{1}{q_{n-1}+q_n};\\ M(\mathcal{O}(q_n)) & = \lambda^{(n)} + \lambda^{(n-1)} < \frac{1}{q_n} + \frac{1}{q_{n+1}}; \end{align*} given any integer $m$ such that $0<m<q_{n+1}$, we write the Euclidean division $m=\ell q_n+r$, and then \begin{equation}\label{EqOrbit} \mathcal{O}(m) = \left( \bigcup_{i=0}^{\ell-1} R_{\rho_n}^i \mathcal{O}(q_n)\right) \cup R_{\rho_n}^\ell \mathcal{O}(r).
\end{equation} \end{lemma} \subsection{Renormalization of rotations}\label{subsecrenor} We now recall some facts about renormalization intervals for circle rotations and their link with continued fractions. This renormalization underlies some of the ideas in the proofs we will present in the sequel and gives nice geometric interpretations of our arguments. We will reuse the notations of Sina\u{\i} and Ulcigrai \cite[\S 1.1]{MR2478478} (see also Sina\u{\i} \cite[Lecture 9]{MR1258087} and the nice visualizations of Hariss and Arnoux \cite{VideoArnoux}). \begin{figure} \begin{tikzpicture}[scale=1] \fill[fill=green, opacity=.1] (0,0) rectangle (-2.5,1); \fill[fill=blue, opacity=.1] (0,0) rectangle (6,1.5); \fill[fill=green, opacity=.1] (1,0) rectangle (6,1.5); \draw (0,0) -- (0,2); \draw (0,0) -- (6,0); \draw[color=blue!60!black] (0,.5) -- (6,.5); \draw[color=blue!60!black] (0,1) -- (6,1); \draw[color=blue!60!black] (0,1.5) -- (6,1.5); \draw[color=blue!60!black] (6,0) -- (6,1.5); \draw (-2.5,0) -- (0,0); \draw[color=green!60!black] (-2.5,.5) -- (0,.5); \draw[color=green!60!black] (-2.5,1) -- (0,1); \draw[color=green!60!black] (-2.5,0) -- (-2.5,1); \draw[color=green!60!black] (1,0) -- (1,1.5); \draw[color=green!60!black] (3.5,0) -- (3.5,1.5); \draw [decorate,decoration={brace,amplitude=5pt},xshift=0,yshift=-.9cm] (6,0) -- (0,0) node [black,midway,yshift=-0.5cm] {$\Delta^{(n-1)}$}; \draw [decorate,decoration={brace,amplitude=5pt},xshift=0,yshift=-.9cm] (0,0) -- (-2.5,0) node [black,midway,yshift=-0.5cm] {$\Delta^{(n)}$}; \draw [decorate,decoration={brace,amplitude=5pt},xshift=0,yshift=.2cm] (1,1.5) -- (6,1.5) node [black,midway,yshift=0.5cm] {$a_{n+1}$ towers}; \draw[<->] (6.2,0) --node[midway, right]{$q_n$} (6.2,1.5); \draw[<->] (-2.7,0) --node[midway, left]{$q_{n-1}$} (-2.7,1); \draw[<->] (-2.5,-.2) --node[midway, below]{$\lambda^{(n)}$} (0,-.2); \draw[<->] (1,-.2) --node[midway, below]{$\lambda^{(n+1)}$} (0,-.2); \draw[<->] (1,-.2) --node[midway, below]{$\lambda^{(n)}$} (3.5,-.2); \draw[<->] (3.5,-.2) --node[midway, below]{$\lambda^{(n)}$} (6,-.2); \end{tikzpicture} \caption{The renormalization interval $\Delta(n-1) = \Delta^{(n)} \cup \Delta^{(n-1)}$ and the associated quantities.}\label{FigRenor0} \end{figure} Let \[\Delta^{(n)} = \left\{\begin{array}{ll} [0,\{q_n\alpha\})\quad & \text{if $n$ is even}\\ {[}\{q_n\alpha\},1)\quad & \text{if $n$ is odd.} \end{array}\right.\] We also denote $\Delta_j^{(n)} = R_\alpha^j (\Delta^{(n)})$. Remark that the length of $\Delta_j^{(n)}$ satisfies $\boldsymbol{\lambda}(\Delta^{(n)}_j) = \lambda^{(n)}$, and that $\lambda^{(n-1)} = \lambda^{(n+1)} + a_{n+1}\lambda^{(n)}$ (this is Equation \eqref{Eqaeta}, which can be observed in Figure \ref{FigRenor0}). For any $n$, the collection made of the intervals $(\Delta_j^{(n)})_{0\le j < q_{n+1}}$ and $(\Delta_j^{(n+1)})_{0\le j < q_n}$ forms a partition $\xi^{(n)}$ of $[0,1)$. It is decomposed into two towers \begin{equation}\label{EqTower} Z^{(n)}_l = \bigcup_{j=0}^{q_{n+1}-1} \Delta^{(n)}_j \quad \text{and} \quad Z^{(n)}_s = \bigcup_{j=0}^{q_{n}-1} \Delta^{(n+1)}_j, \end{equation} called respectively the \emph{large} and \emph{small} towers. The interval $\Delta(n) = \Delta^{(n)} \cup \Delta^{(n+1)}$ is called the \emph{$n^\text{th}$ renormalization interval} of the rotation of angle $\alpha$ on $\mathbb{T}$. It can be seen as a subset of $\mathbb{T}$, so that one can define the \emph{induced map} $T^{(n)}$ as the first return map of $R_\alpha$ on $\Delta(n)$.
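The quantities entering this renormalization picture are easy to explore numerically. The following minimal Python sketch (our own illustration, not part of the argument; the angle $\alpha = \sqrt{2}-1$ and all identifiers are ad hoc choices) computes the partial quotients $a_n$, the denominators $q_n$ and the lengths $\lambda^{(n)} = \|q_n\alpha\|$, and checks the identities \eqref{EqTotTime}, \eqref{Eqaeta} and \eqref{EqLambdaQ} of Lemma~\ref{properties}.
\begin{verbatim}
from math import sqrt, floor

# Illustrative sketch: continued fraction data of an irrational alpha.
# Computes partial quotients a_n, denominators q_n and the quantities
# lambda^(n) = |q_n*alpha - p_n| used throughout this section.

alpha = sqrt(2) - 1  # example angle, equal to [0; 2, 2, 2, ...]

def cf_data(alpha, depth=15):
    """Partial quotients a_n, convergents p_n/q_n and lambda^(n)."""
    a, x = [], alpha
    for _ in range(depth):
        a.append(floor(x))
        x = 1.0 / (x - floor(x))
    p, q = [0, 1], [1, 0]  # seeds p_{-2}, p_{-1} and q_{-2}, q_{-1}
    for an in a:           # p_n = a_n p_{n-1} + p_{n-2}, same for q_n
        p.append(an * p[-1] + p[-2])
        q.append(an * q[-1] + q[-2])
    p, q = p[2:], q[2:]
    lam = [abs(qn * alpha - pn) for pn, qn in zip(p, q)]
    return a, p, q, lam

a, p, q, lam = cf_data(alpha)
for n in range(1, 10):
    assert q[n + 1] == a[n + 1] * q[n] + q[n - 1]                     # (3)
    assert abs(lam[n - 1] - (a[n + 1] * lam[n] + lam[n + 1])) < 1e-9  # (4)
    assert 1 / (q[n + 1] + q[n]) < lam[n] < 1 / q[n + 1]              # (5)
print("identities verified for n = 1, ..., 9")
\end{verbatim}
(The comments (3), (4), (5) refer, in order, to \eqref{EqTotTime}, \eqref{Eqaeta} and \eqref{EqLambdaQ}; the floating-point tolerance is an arbitrary choice.)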
The induced map $T^{(n)}$ is a rotation of angle $\pm\lambda^{(n)}$ (the sign depending on the parity of $n$). Moreover, the return time is constant on each piece: it equals $q_{n+1}$ on $\Delta^{(n)}$ and $q_n$ on $\Delta^{(n+1)}$ (see Figure \ref{FigRenor1}). For $x\in \mathbb{T}$, we denote by $x^{(n)}$ the projection of $x$ on $\Delta(n)$. More precisely \begin{itemize} \item if $x\in \Delta^{(n)}_j$ for some $0\le j <q_{n+1}$, then $x^{(n)} = R_\alpha^{-j}(x)$; \item if $x\in \Delta^{(n+1)}_j$ for some $0\le j <q_{n}$, then $x^{(n)} = R_\alpha^{-j}(x)$. \end{itemize} \medskip \begin{figure} \resizebox{\textwidth}{!}{ \begin{tikzpicture}[scale=.8] \fill[fill=green, opacity=.1] (0,0) rectangle (-2.5,1); \fill[fill=blue, opacity=.1] (0,0) rectangle (6,1.5); \fill[fill=green, opacity=.1] (1,0) rectangle (6,1.5); \draw (0,0) -- (0,2); \draw (0,0) -- (6,0); \draw[color=blue!60!black] (0,.5) -- (6,.5); \draw[color=blue!60!black] (0,1) -- (6,1); \draw[color=blue!60!black] (0,1.5) -- (6,1.5); \draw[color=blue!60!black] (6,0) -- (6,1.5); \draw (-2.5,0) -- (0,0); \draw[color=green!60!black] (-2.5,.5) -- (0,.5); \draw[color=green!60!black] (-2.5,1) -- (0,1); \draw[color=green!60!black] (-2.5,0) -- (-2.5,1); \draw[color=green!60!black] (1,0) -- (1,1.5); \draw[color=green!60!black] (3.5,0) -- (3.5,1.5); \draw [decorate,decoration={brace,amplitude=5pt},xshift=0,yshift=-.7cm] (6,0) -- (0,0) node [black,midway,yshift=-0.5cm] {\footnotesize $\Delta^{(n-1)}$}; \draw [decorate,decoration={brace,amplitude=5pt},xshift=0,yshift=-.7cm] (0,0) -- (-2.5,0) node [black,midway,yshift=-0.5cm] {\footnotesize $\Delta^{(n)}$}; \draw[->,>=stealth,rounded corners=5pt,color=green!50!black] (-1.25,1) -- (-1.25,1.3) -- (-2.7,1.3) -- (-2.7,-.5) -- (4.75,-.5) -- (4.75,0); \draw[->,>=stealth,rounded corners=5pt,color=green!50!black] (4.75,1.5) -- (4.75,1.8) -- (3.3,1.8) -- (3.3,-.3) -- (2.25,-.3) -- (2.25,0); \draw[<->] (6.2,0) --node[midway, right]{$q_n$} (6.2,1.5); \draw[<->] (-2.9,0) --node[midway, left]{$q_{n-1}$} (-2.9,1); \fill[fill=green, opacity=.1] (11,0) rectangle (8.5,4); \fill[fill=blue, opacity=.1] (11,0) rectangle (12,1.5); \draw (11,0) -- (11,4.2); \draw (11,0) -- (12,0); \draw[color=blue!60!black] (11,.5) -- (12,.5); \draw[color=blue!60!black] (11,1) -- (12,1); \draw[color=blue!60!black] (11,1.5) -- (12,1.5); \draw[color=blue!60!black] (12,0) -- (12,1.5); \draw (8.5,0) -- (11,0); \draw[color=green!60!black] (8.5,.5) -- (11,.5); \draw[color=green!60!black] (8.5,1) -- (11,1); \draw[color=green!60!black] (8.5,1.5) -- (11,1.5); \draw[color=green!60!black] (8.5,2) -- (11,2); \draw[color=green!60!black] (8.5,2.5) -- (11,2.5); \draw[color=green!60!black] (8.5,3) -- (11,3); \draw[color=green!60!black] (8.5,3.5) -- (11,3.5); \draw[color=green!60!black] (8.5,4) -- (11,4); \draw[color=green!60!black] (8.5,0) -- (8.5,4); \draw [decorate,decoration={brace,amplitude=5pt},xshift=0,yshift=-.5cm] (12,0) -- (11,0) node [black,midway,yshift=-0.5cm] {\footnotesize $\Delta^{(n+1)}$}; \draw [decorate,decoration={brace,amplitude=5pt},xshift=0,yshift=-.5cm] (11,0) -- (8.5,0) node [black,midway,yshift=-0.5cm] {\footnotesize $\Delta^{(n)}$}; \draw[<->] (12.2,0) --node[midway, right]{$q_n$} (12.2,1.5); \draw[<->] (8.3,0) --node[midway, left]{$q_{n+1}$} (8.3,4); \draw [decorate,decoration={brace,amplitude=5pt},xshift=.1cm,yshift=0cm] (11,2.5) -- (11,1) node [black,midway,xshift=0.8cm] {\footnotesize Sector}; \draw [decorate,decoration={brace,amplitude=5pt},xshift=.1cm,yshift=0cm] (11,4) -- (11,2.5) node [black,midway,xshift=0.8cm]
{\footnotesize Sector}; \end{tikzpicture} } \caption{Renormalization intervals $\Delta(n-1)$ (left) and $\Delta(n)$ (right) for odd $n$. The green arrows denote the dynamics of the intervals, i.e. the way to build the dynamics of the partition $\xi^{(n)}$ from that of $\xi^{(n-1)}$.}\label{FigRenor1} \end{figure} \begin{figure} \resizebox{\textwidth}{!}{ \begin{tikzpicture} \fill[fill=green, opacity=.1] (0,0) rectangle (-2.5,1); \fill[fill=blue, opacity=.1] (0,0) rectangle (6,1.5); \fill[fill=green, opacity=.1] (1,0) rectangle (6,1.5); \draw (0,0) -- (0,2); \draw (0,0) -- (6,0); \draw[color=blue!60!black] (0,.5) -- (6,.5); \draw[color=blue!60!black] (0,1) -- (6,1); \draw[color=blue!60!black] (0,1.5) -- (6,1.5); \draw[color=blue!60!black] (6,0) -- (6,1.5); \draw (-2.5,0) -- (0,0); \draw[color=green!60!black] (-2.5,.5) -- (0,.5); \draw[color=green!60!black] (-2.5,1) -- (0,1); \draw[color=green!60!black] (-2.5,0) -- (-2.5,1); \draw[color=green!60!black] (1,0) -- (1,1.5); \draw[color=green!60!black] (3.5,0) -- (3.5,1.5); \draw[color=red!70!black, thick] (0,.2) -- (0,-.2); \draw[color=red!70!black, thick] (2.5,.2) -- (2.5,-.2); \draw[color=red!70!black, dashed, thick] (2.5,0) -- (2.5,1.5); \draw[color=red!70!black, thick] (5,.2) -- (5,-.2); \draw[color=red!70!black, dashed, thick] (5,0) -- (5,1.5); \draw[color=red!70!black, thick] (-1,.2) -- (-1,-.2); \draw[color=red!70!black, dashed, thick] (-1,0) -- (-1,1); \draw[->,shorten >=2pt,shorten <=2pt,color=red!70!black] (0,-.2) -- (2.5,-.2); \draw[->,shorten >=2pt,shorten <=2pt,color=red!70!black] (2.5,-.2) -- (5,-.2); \draw[shorten <=2pt,color=red!70!black] (5,-.2) -- (5.8,-.2); \draw[densely dotted,color=red!70!black] (5.8,-.2) -- (6,-.2); \draw[->,shorten >=2pt,color=red!70!black] (-2.3,-.2) -- (-1,-.2); \draw[densely dotted,color=red!70!black] (-2.5,-.2) -- (-2.3,-.2); \draw[->,>=stealth,rounded corners=5pt,color=red!50!black, dotted, thick] (0,-.2) -- (0,-.5) -- (.8,-.5) -- (.8,2) -- (2.5,2) -- (2.5,1.6); \draw[->,>=stealth,rounded corners=5pt,color=red!50!black, dotted, thick] (2.5,-.2) -- (2.5,-.5) -- (3.2,-.5) -- (3.2,2) -- (5,2) -- (5,1.6); \draw[rounded corners=5pt,color=red!50!black, dotted, thick] (5,-.2) -- (5,-.5) -- (5.7,-.5) -- (5.7,2) -- (6,2) ; \draw[->,>=stealth,rounded corners=5pt,color=red!50!black, dotted, thick] (-2.5,2) -- (-1,2) -- (-1,1.1); \draw[color=red!70!black] (1.5,-.5) node{$\lambda^{(n)}$}; \fill[fill=green, opacity=.1] (11,0) rectangle (8.5,4); \fill[fill=blue, opacity=.1] (11,0) rectangle (12,1.5); \draw (11,0) -- (11,4.5); \draw (11,0) -- (12,0); \draw[color=blue!60!black] (11,.5) -- (12,.5); \draw[color=blue!60!black] (11,1) -- (12,1); \draw[color=blue!60!black] (11,1.5) -- (12,1.5); \draw[color=blue!60!black] (12,0) -- (12,1.5); \draw (8.5,0) -- (11,0); \draw[color=green!60!black] (8.5,.5) -- (11,.5); \draw[color=green!60!black] (8.5,1) -- (11,1); \draw[color=green!60!black] (8.5,1.5) -- (11,1.5); \draw[color=green!60!black] (8.5,2) -- (11,2); \draw[color=green!60!black] (8.5,2.5) -- (11,2.5); \draw[color=green!60!black] (8.5,3) -- (11,3); \draw[color=green!60!black] (8.5,3.5) -- (11,3.5); \draw[color=green!60!black] (8.5,4) -- (11,4); \draw[color=green!60!black] (8.5,0) -- (8.5,4); \draw[color=red!70!black, thick] (10,.2) -- (10,-.2); \draw[color=red!70!black, thick] (11,.2) -- (11,-.2); \draw[color=red!70!black, dashed, thick] (10,0) -- (10,4); \draw[color=orange, thick] (9,.2) -- (9,-.2); \draw[color=orange, densely dotted, thick] (9,0) -- (9,4); \draw[color=orange, thick] (11.5,.2) -- (11.5,-.2); 
\draw[color=orange, densely dotted, thick] (11.5,0) -- (11.5,1.5); \draw[->,shorten >=2pt,shorten <=2pt,color=orange] (11,-.2) -- (10,-.2); \draw[->,shorten >=2pt,shorten <=2pt,color=orange] (10,-.2) -- (9,-.2); \draw[shorten <=2pt,color=orange] (9,-.2) -- (8.7,-.2); \draw[densely dotted,color=orange] (8.7,-.2) -- (8.5,-.2); \draw[densely dotted,color=orange] (12,-.2) -- (11.8,-.2); \draw[->,shorten >=2pt,color=orange] (11.8,-.2) -- (11.5,-.2); \draw[color=orange!80!black] (10.5,-.6) node{$-\lambda^{(n+1)}$}; \end{tikzpicture}} \caption{The sets of preimages of $0$ under the rotation up to times $q_{n+1}$ (red, dashed) and $q_{n+2}$ (orange, dotted).}\label{FigRenor2} \end{figure} The time it takes for the pre-orbit $(R^{-j}_\alpha(0))_{j>0}$ of $0$ to visit every element of the partition $\xi^{(n)}$ is equal to $q_{n+2}$ (see Figure \ref{FigRenor2}). Indeed, it first meets each element $\Delta^{(n)}_j$ of the large tower $Z_l^{(n)}$ (defined by \eqref{EqTower}) exactly $\lfloor\frac{\lambda^{(n)}}{\lambda^{(n+1)}}\rfloor = a_{n+2}$ times, and then each element $\Delta^{(n+1)}_j$ of the small tower $Z_s^{(n)}$ once. The total time is thus equal to (see \eqref{EqTotTime}) \[ q_{n+1}\left\lfloor\frac{\lambda^{(n)}}{\lambda^{(n+1)}}\right\rfloor + q_n = q_{n+1}a_{n+2} + q_n = q_{n+2}. \] This finite pre-orbit coincides with the set of points of the tower $Z_l^{(n+1)}$ lying above the points of $\Delta^{(n+1)}$ within distance $\lambda^{(n+2)}$ of $0$. \medskip \label{Sectors} To study Birkhoff sums, we will cut them into sums over ``sectors''. The tower $Z^{(n)}_l$ (of height $q_{n+1}$) can be decomposed into a ``base'' of height $q_{n-1}$, which corresponds to $Z^{(n-1)}_s$, and $a_{n+1}$ groups of floors -- which we will call \emph{sectors} -- of height $q_n$, made of the floors that project onto the same interval of $\Delta^{(n-1)}$ (see Figure \ref{FigRenor1}). \section{Some estimates}\label{SecTech} In this whole section we fix $\alpha\notin \mathbb{Q}$ and use the notation of the previous section concerning continued fractions. For $y\in (0,1)$, we denote \begin{equation}\label{EqDefPsi} \psi_1(y) = \frac{1}{y},\quad \psi_2(y) = \frac{1}{1-y} \quad\text{and}\quad \psi(y) = \max\big(\psi_1(y), \psi_2(y)\big) . \end{equation} Remark that this implies that if $y$ is seen as an element of $\mathbb{T}$, then $\psi(y) = \|y\|^{-1}$, and moreover \[\frac{\psi_1(y)+ \psi_2(y)}{2} \le \psi(y) \le \psi_1(y) + \psi_2(y).\] In the sequel, we will use the notation $\psi(y)$ for $y\in\mathbb{T}$, by identifying the circle $\mathbb{T}$ with $[0,1)$. For $y\in\mathbb{T}$, define \begin{equation}\label{EqDefS} S(y) = \sum_{i=0}^{q_n-1} \psi\big(R_\alpha^{i}(y)\big), \end{equation} the Birkhoff sum over a sector. \begin{lemma}\label{EqSellFinal} Let $y\in \mathbb{T}$. We denote by $y_0$ the point of the orbit $y,R_\alpha(y), \dots, R_\alpha^{q_n-1}(y)$ which is closest to $0$. Then \[S(y) \ge \psi(y_0) + \frac{\log q_n}{2 \lambda^{(n-1)}},\] and \[S(y) \le \psi(y_0) + \frac{4\log q_n}{\lambda^{(n-1)}}.\] \end{lemma} In the sequel, we will use repeatedly the following trivial fact, obtained by comparison with an integral (for the second part, the comparison is done with the logarithmic integral function).
\begin{lemma}\label{LemSerHarmo} For any $k_0 \ge 1$, \[\sum_{k=k_0}^N \frac{1}{k} \ge \log\left(\frac{N+1}{k_0}\right) \qquad \text{and} \qquad \sum_{k=1}^N \frac{1}{k} \le \log(3N).\] Moreover, there exists $C> 0 $ such that \[\sum_{k=2}^a \frac{1}{\log k} \leq \frac{C a}{\log a}\] for every integer $a \geq 2$. \end{lemma} \begin{proof}[Proof of Lemma~\ref{EqSellFinal}] Fix any point $y\in\mathbb{T}$, and consider its orbit $\mathcal O = \{y,R_\alpha(y),\dots, R_\alpha^{q_n-1}(y)\}$ of length $q_n$. Denote by $y_0$ the point of $\mathcal{O}$ of smallest norm and write $\mathcal{O}^*=\mathcal{O} \setminus \{y_0\}$. Note that $\| z \| =\| -z \|$ for every $z \in \mathbb{T}$ so, by swapping from $\mathcal{O}$ to $-\mathcal{O} = \{-y, -R_\alpha(y), \ldots, -R_\alpha^{q_n-1}(y) \}$ if necessary, we suppose that $y_0\in (1/2,1]$. Recall that Lemma~\ref{properties} says that the largest gaps in $\mathcal{O}$ are of size $\lambda^{(n)}+\lambda^{(n-1)}$. So if we write $\mathcal{O}^*=\{y_1, \ldots, y_{q_n-1}\}$ with $0 < y_1 < y_2 < \ldots < y_{q_n-1} < 1$, we have $y_i< i(\lambda^{(n-1)}+\lambda^{(n)}) < 2i\lambda^{(n-1)} $. Hence \begin{align*} \sum_{i=0}^{q_n-1} \psi \big(R_\alpha^i(y)\big) & \geq \psi(y_0) + \sum_{i=1}^{q_n-1} \psi_1(y_i) \\ & \ge \psi(y_0) + \sum_{i=1}^{q_n-1} \psi_1\big(2i\lambda^{(n-1)}\big). \end{align*} Using Lemma~\ref{LemSerHarmo}, one deduces that \[\sum_{i=0}^{q_n-1} \psi \big(R_\alpha^i(y)\big) \ge \psi(y_0) + \frac{\log(q_n)}{2\lambda^{(n-1)}}.\] \medskip We now turn to the second inequality. Following the same ideas, one gets that \[y_i \ge (i-1)\lambda^{(n-1)} + \frac{\lambda^{(n-1)}}{2} \qquad \text{and} \qquad y_{q_n-j}\le 1-j\lambda^{(n-1)},\] and hence \[S(y) \le \psi(y_0) + 2\sum_{i=1}^{q_n-1} \frac{2}{i\lambda^{(n-1)}},\] and so, by Lemma~\ref{LemSerHarmo}, \[S(y) \le \psi(y_0) + \frac{4\log q_n}{\lambda^{(n-1)}}.\] \end{proof} As a direct consequence of Lemma~\ref{EqSellFinal}, we have the following result. \begin{corollary} \label{kq orbit} For every $n \geq 1$, every $k \geq 1$ and every $x \in \mathbb{T}$ we have \[\sum_{i=0}^{k q_n-1} \psi\big(R_\alpha^i(x)\big) > \frac{k \log q_n}{2\lambda^{(n-1)}}.\] \end{corollary} \begin{lemma}\label{LemFinalMartin} Let $\alpha = [a_0; a_1, a_2, \ldots]$ be an irrational number. Suppose that $x, y \in \mathbb{T}$ satisfy $\|x-y\| \leq \lambda^{(n)}$ and let $j_0,j_1$ be such that \[\psi_1\big(R_\alpha^{j_0}(x)\big) = \max_{0 \leq j < q_n} \psi_1\big(R_\alpha^j(x)\big) \qquad \text{and} \qquad \psi_1\big(R_\alpha^{j_1}(y)\big) = \max_{0 \leq j < q_n} \psi_1\big(R_\alpha^j(y)\big).\] Then \begin{align*} \bigg| \sum_{j=0}^{q_n-1}\psi_1\big(R_\alpha^j(x)\big) & - \sum_{j=0}^{q_n-1}\psi_1\big(R_\alpha^j(y)\big) \bigg| \\ & \leq \big| \psi_1\big(R_\alpha^{j_0}(x)\big) - \psi_1\big(R_\alpha^{j_0}(y)\big) \big| + \big| \psi_1\big(R_\alpha^{j_1}(x)\big) - \psi_1\big(R_\alpha^{j_1}(y)\big) \big| + \frac{\lambda^{(n)} q_n }{\lambda^{(n-1)}}. \end{align*} \end{lemma} A similar statement holds for $\psi_2$ instead of $\psi_1$. Note that the philosophy of the lemma recalls the cancellations of Sina\u{\i} and Ulcigrai in \cite{MR2478478}. \begin{proof} Let us first show that \begin{equation}\label{half lambda} \psi_1\big(R_\alpha^{j}(x)\big) \leq \frac{1}{\lambda^{(n-1)}} \end{equation} for every $0 \leq j < q_n$ with $j\neq j_0$.
Suppose, for the sake of arriving at a contradiction, that we can find some $0 \leq j< q_n$ with $j \neq j_0$ such that $\psi_1(R_\alpha^{j}(x)) > 1/\lambda^{(n-1)}$, and write $x_0 = R_\alpha^{j_0}(x)$ and $x'=R_\alpha^j(x)$. Then, since $\psi_1(x_0) \geq \psi_1(x')$, we must also have $\psi_1(x_0) > 1/\lambda^{(n-1)}$. Hence, $x_0,x'\in(0,\lambda^{(n-1)})$ and so \[\|x_0-x'\| < \lambda^{(n-1)}.\] But this is absurd since $\lambda^{(n-1)}$ is the smallest distance between distinct points in any orbit of length $q_n$ (see \eqref{EqContFrac0}). This proves (\ref{half lambda}). Similarly, \[\psi_1\big(R_\alpha^{j}(y)\big) \leq \frac{1}{\lambda^{(n-1)}}\] for every $0 \leq j < q_n$ with $j\neq j_1$. \medskip Hence, the set \[\big\{R_\alpha^{j}(x): 0 \leq j< q_n, \ j\neq j_0\big\} \cup \big\{R_\alpha^{j}(y): 0 \leq j< q_n, \ j\neq j_1\big\} \] is contained in the set \[X = \mathbb{T} \setminus \left(0,\, \lambda^{(n-1)}\right).\] The function $\psi_1$ has Lipschitz constant $1/\lambda^{(n-1)}$ on $X$, so \begin{align*} \Big|S_{q_n}(x)-S_{q_n}(y) - \left( \psi_1(R_\alpha^{j_0}(x)) - \psi_1(R_\alpha^{j_0}(y)) \right) & - \left( \psi_1(R_\alpha^{j_1}(x)) - \psi_1(R_\alpha^{j_1}(y)) \right) \Big|\\ & = \Big| \sum_{\substack{0\leq j < q_n \\ j \neq j_0,j_1}} \psi_1\big(R_\alpha^j(x)\big)- \psi_1\big(R_\alpha^j(y)\big)\Big| \\ & \leq \sum_{\substack{0\leq j < q_n \\ j \neq j_0,j_1}} \frac{1}{\lambda^{(n-1)}} \big\|R_\alpha^j(x)-R_\alpha^j(y)\big\| \\ & \leq \frac{\lambda^{(n)}(q_n-2)}{\lambda^{(n-1)}}. \end{align*} \end{proof} \subsection*{Using comparison with a rational rotation} \begin{lemma}\label{near rational orbit} Let $\alpha = [a_0; a_1, a_2, \ldots]$ be an irrational number with convergents $p_n/q_n$. Suppose that for some given $n$, $a_{n+1} \geq 2$. Then there exists a bijection \[\sigma: \{0, \ldots, q_n-1\} \to \{0, \ldots, q_n-1 \} \] such that \[\left\| k \alpha - \frac{\sigma(k)}{q_n} \right\| < \lambda^{(n)} < \frac{1}{q_{n+1}}\] for every $ 0 \leq k < q_n$. \end{lemma} \begin{proof} First we shall prove that, given any integer $0 \leq k < q_n$, there exists an integer $0 \leq \ell < q_n$ such that \[\left\| k \alpha - \frac{\ell}{q_n} \right\| < \lambda^{(n)}.\] Indeed, suppose there is some $k$ for which no such integer $\ell$ can be found. Then \[\left|k\alpha - \frac{\ell}{q_n} + m \right| \geq \lambda^{(n)} \] for every $0 \leq \ell < q_n$ and every $m \in \mathbb{Z}$. But then \[ \left|k q_n \alpha - \ell + q_n m \right| \geq q_n \lambda^{(n)}\] for every $0 \leq \ell < q_n$ and every $m \in \mathbb{Z}$. Hence \[\left\|k q_n \alpha \right\| \geq q_n \lambda^{(n)}.\] On the other hand, we have that \[\left\|k q_n \alpha \right\| \leq k \left\|q_n \alpha \right\| = k \lambda^{(n)} \leq (q_n-1) \lambda^{(n)},\] a contradiction. Let $\sigma: \{0, \ldots, q_n-1\} \to \{ 0, \ldots, q_n-1\}$ be such that \[\left\| k\alpha - \frac{\sigma(k)}{q_n} \right\| < \lambda^{(n)} \] for every $0 \leq k < q_n$. We claim that $\sigma$ is injective; hence a bijection. Suppose it is not. Then there exist $0 \leq k_1 < k_2 < q_n$ such that $\sigma(k_1) = \sigma (k_2)$. But then \[\big \|(k_2 - k_1) \alpha \big\| \leq \left\|k_2 \alpha - \frac{\sigma(k_2)}{q_n} \right\| + \left\|\frac{\sigma(k_1)}{q_n}-k_1 \alpha \right\| < 2 \lambda^{(n)}.\] On the other hand, since $k_2-k_1 < q_n$, we know that $\|(k_2-k_1) \alpha \|$ must be at least $\lambda^{(n-1)}$. But, by \eqref{Eqaeta}, \[\lambda^{(n-1)} = a_{n+1} \lambda^{(n)} + \lambda^{(n+1)} \geq 2 \lambda^{(n)},\] a contradiction.
\end{proof} \begin{lemma}\label{sumovernonzerorationals} Let $q$ be a positive integer and let $0<\delta< \frac{1}{q}$. Let $x_1, \ldots, x_{q-1} \in \mathbb{T}$ be such that $\|x_k - \frac{k}{q}\| < \delta$ for every $1 \leq k \leq q-1$. Then \[ \sum_{k=1}^{q-1} \frac{1}{\| x_k\|} \leq \frac{2q}{1-\delta q}\log(3q).\] \end{lemma} \begin{proof} Let $0<\hat{x}_1< \ldots < \hat{x}_{q-1} < 1$ be representatives of $x_1, \ldots, x_{q-1}$ in the fundamental domain $[0,1)$ of $\mathbb{T}$. In order to prove the lemma, it suffices to prove that \[\sum_{k=1}^{q-1} \psi_i(\hat{x}_k) \leq \frac{q}{1-\delta q} \log(3 q) \] for $i=1,2$. (Recall the definition of $\psi_1, \psi_2$ in (\ref{EqDefPsi})). By hypothesis, \[\hat{x}_k > \frac{k}{q}-\delta = \frac{k}{q}\left( 1-\frac{ \delta q}{k} \right) \geq \frac{k}{q} \left( 1-\delta q \right) \] for every $1 \leq k \leq q-1$. Hence, according to Lemma~\ref{LemSerHarmo}, we have \[\sum_{k=1}^{q-1} \psi_1(\hat{x}_k) < \sum_{k=1}^{q-1} \frac{1}{\frac{k}{q}(1-q \delta)} = \left( \frac{q}{1-\delta q}\right) \sum_{k=1}^{q-1} \frac{1}{k}<\frac{q}{1-\delta q} \log(3q).\] The estimate for $\psi_2$ is analogous. \end{proof} \begin{lemma} \label{LemSizeDiverg} Let $n \geq 11$ and $0<\epsilon<1$. Suppose that $a_{n+1} \geq 2$ and consider $x \in \mathbb{T}$ such that for some $0<i<q_n$ we have \[\|x+i \alpha \| < \frac{ \epsilon }{q_n \log(3 q_n) }.\] Then \[\frac{6 \epsilon}{\|x+i \alpha \|} > \sum_{k=0}^{i-1} \frac{1}{\|x+k\alpha\|}.\] \end{lemma} \begin{proof} Let $\gamma = \epsilon/(q_n \log(3 q_n))$ and take $\delta = \lambda^{(n)} + \gamma$. From Lemma~\ref{properties} we have \[q_n \lambda^{(n)} < \frac{q_n}{q_{n+1}}< \frac{q_n}{q_n a_{n+1}} \leq \frac{1}{2}.\] If $\alpha$ is the golden mean, then $q_{11} = 144$ (the $11^{th}$ element of the Fibonacci sequence) and $\log(3 q_{11}) > 6$. For every other value of $\alpha$ we have $q_{11} \geq 144$, whence it follows that $\log(3 q_n) >6$ for every $n \geq 11$. Hence $q_n \gamma < \frac{1}{6}$, so we have $q_n \delta < \frac{2}{3}$ and so \begin{equation}\label{Eqdeltaqn} \frac{1}{1-\delta q_n} < 3. \end{equation} Let $\sigma$ be as in Lemma~\ref{near rational orbit}. Let $\iota:\{0, \ldots, q_n-1\} \to \{0, \ldots, q_n -1\}$ be the natural involution given by $\iota(k) = [-k]_{q_n}$, where $[j]_{q_n}$ is the unique element in $\{ 0,\ldots, q_n-1 \}$ such that $[j]_{q_n} \equiv j \mod q_n$, and denote by $\tilde{\sigma}$ the composition $\iota \circ \sigma$. Then, according to Lemma~\ref{near rational orbit} we have \[\left\| -k \alpha - \frac{\tilde{\sigma}(k)}{q_n} \right\| = \left\| \frac{\sigma(k)}{q_n} - k \alpha \right\| < \lambda^{(n)}\] for every $0 \leq k < q_n$. By hypothesis, $0<i< q_n$ is such that $\|x+i\alpha \| < \gamma$. Thus \[ \left\|x+(i-k)\alpha - \frac{\tilde{\sigma}(k)}{q_n} \right\|\leq \|x+i \alpha \|+ \left\|-k \alpha - \frac{\tilde{\sigma}(k)}{q_n} \right\| < \gamma + \lambda^{(n)} = \delta. \] Hence, denoting $x_k= x+(i- \tilde{\sigma}^{-1}(k))\alpha$, we have \begin{equation}\label{EqInfDelta} \left\|x_k-\frac{k}{q_n}\right\| \le \delta.
\end{equation} But \[\sum_{k=0}^{i-1} \frac{1}{\|x+k\alpha\| } = \sum_{k=1}^{i} \frac{1}{\| x + (i-k)\alpha \|} = \sum_{k=1}^i \frac{1}{\|x_{\tilde{\sigma}(k)}\|} \leq \sum_{k=1}^{q_n-1} \frac{1}{\|x_k\|},\] (the fact that $x_0$ does not appear in the last sum comes from the fact that $\tilde \sigma(0) = 0$), and \eqref{EqInfDelta} allows us to apply Lemma~\ref{sumovernonzerorationals}: \[\sum_{k=0}^{i-1} \frac{1}{\|x+k\alpha\| }\leq \frac{2q_n}{1-\delta q_n}\log(3q_n).\] Combining it with \eqref{Eqdeltaqn} we get \[\sum_{k=0}^{i-1} \frac{1}{\|x+k\alpha\| }\leq 6 q_n \log(3 q_n),\] hence \[ \frac{6 \epsilon}{\|x+i\alpha\|} > 6 q_n \log(3 q_n) \geq \sum_{k=0}^{i-1} \frac{1}{\| x+k \alpha \|} \] as required. \end{proof} \begin{lemma}\label{sumoverrationals} Let $q$ be a positive integer, $A>0$, and $\beta \in \mathbb{T}$ such that \[\left\| \frac{n}{q} - \beta \right\| >\frac{A}{q} \quad \forall \ 0 \leq n < q.\] Then \begin{equation*} \sum_{n=0}^{q-1} \frac{1}{\| \frac{n}{q} - \beta \| } < 2 q \big(A^{-1}+\log( A^{-1} q)\big). \end{equation*} \end{lemma} \begin{proof} Let $X = \{ \frac{n}{q}- \beta \mod 1: 0 \leq n < q \} \subset (0,1)$ and write $X = \{ x_1, \ldots, x_q \}$ in such a way that \[ x_1 < x_2 < \ldots < x_q.\] By hypothesis we have that $x_1\geq \frac{A}{q}$ and $x_q \leq 1-\frac{A}{q}$. Note also that \[x_{k+1}-x_{k} = \frac{1}{q} \] for every $1 \leq k < q$. Then, by \eqref{EqDefPsi}, \begin{equation}\label{fromtwosides} \sum_{n=0}^{q-1} \frac{1}{\|\frac{n}{q}- \beta\|} \leq \sum_{k=1}^q \psi_1(x_k) + \psi_2 (x_k). \end{equation} Note that $\frac{1}{q} \sum_{k=2}^q \psi_1(x_k)$ is a lower Riemann sum for $\int_{x_1}^{x_q} \psi_1(t) \ dt$. Hence (as $x_2\ge 1/q$) \[\sum_{k=1}^q \psi_1(x_k) < \psi_1(x_1) + q \int_{x_1}^{x_q} \frac{dt}{t} < A^{-1} q+ q\log\left(\frac{x_q}{x_1}\right) < q(A^{-1} + \log( A^{-1} q )).\] Similarly, we have \[\sum_{k=1}^q \psi_2(x_k) < q(A^{-1} + \log(A^{-1} q)),\] and the proof follows from (\ref{fromtwosides}). \end{proof} Recall that our goal is to get bounds on the quantity $\Theta_n^\beta(x) = S_n(x)/S_n(x-\beta)$ (see \eqref{DefTheta}). This amounts to bounding both $S_n(x)$ and $S_n(x-\beta)$ from above and below, which will be done in Lemmas~\ref{ABC} and \ref{lower bound}. For each non-negative $n$ and positive $\ell < q_{n+1}/q_n$, denote by $E_{n,\ell}$ the $\frac{\lambda^{(n)}}{2}$-neighbourhood of the orbit $\mathcal{O}(\ell q_n)$, that is, \begin{equation}\label{DefE} E_{n, \ell} = \bigcup_{k=0}^{\ell q_n - 1} R_\alpha^{-k}(I_n), \end{equation} where $I_n = \big(-\frac{\lambda^{(n)}}{2},\frac{\lambda^{(n)}}{2} \big)$. \begin{lemma} \label{size} One has $\boldsymbol{\lambda}(E_{n, \ell})=\ell q_n \lambda^{(n)} $. \end{lemma} \begin{proof} Note that $\boldsymbol{\lambda}(R_\alpha^{-k}(I_n)) = \boldsymbol{\lambda}(I_n) = \lambda^{(n)}$ and that $\sum_{k=0}^{\ell q_n-1} \boldsymbol{\lambda}(R_\alpha^{-k}(I_n)) = \ell q_n \lambda^{(n)}$, so the proof follows if we can show that the sets $R_\alpha^{-k}(I_n)$, $k = 0 , \ldots , \ell q_n-1$ are pairwise disjoint. Suppose they are not. Then there exist $0\leq k < k' < \ell q_n$ such that $\| k\alpha - k'\alpha \|< \lambda^{(n)}$. Writing $m = k'-k$, this gives $\|m \alpha \| < \lambda^{(n)}$ for some $0 < m< q_{n+1}$. But that is absurd, since $q_{n+1}$ is the smallest positive integer with this property.
\end{proof} \begin{lemma} \label{ABC} Fix some $n>0$ and suppose that there are numbers $0<B < A < \frac{1}{2}$, a positive integer $\ell$ and some $\beta \in \mathbb{T}$ such that \[\left\| \frac{i}{q_n}-\beta \right\| \geq \frac{A}{q_n}\] for every $0 \leq i < q_n$, and that \[\ell+1 \leq B a_{n+1}.\] Then, given any $x \in E_{n, \ell}$, we have \[\sum_{k=0}^{\ell q_n - 1} \frac{1}{\|x+k\alpha - \beta \|} < \frac{2 \ell q_n ( A^{-1} + \log( A^{-1} q_n))}{1-B/A} .\] \end{lemma} \begin{proof} Note that \[\sum_{k=0}^{\ell q_n -1} \frac{1}{\|x + k \alpha - \beta \|} = \sum_{r=0}^{\ell-1} \sum_{s=0}^{q_n-1} \frac{1}{\|x + (r q_n+s) \alpha - \beta \| }.\] Thus it suffices to show that \[ \sum_{s=0}^{q_n-1} \frac{1}{\|x + (r q_n+s) \alpha - \beta \| } < \frac{2 q_n ( A^{-1} + \log( A^{-1} q_n))}{1-B/A}\] for every $0 \leq r < \ell$. Fix some $x \in E_{n, \ell}$. Then, by the definition of $E_{n, \ell}$ there exists $0 \leq k < \ell q_n$ such that $\|x+k \alpha \| < \lambda^{(n)} /2$. Let $0 \leq c < \ell$ and $0 \leq d < q_n$ be such that $k = c q_n + d$. Let \[\sigma: \{0, \ldots, q_n-1 \} \to \{0, \ldots, q_n-1\}\] be as in Lemma~\ref{near rational orbit}. (The inequality $\ell+1 < B a_{n+1}$ implies that $a_{n+1} >4$ so that Lemma~\ref{near rational orbit} applies.) Given an integer $i$, let $[i]_{q_n}$ denote the unique integer $0 \leq m < q_n$ such that $i \equiv m \mod q_n$. Then, for every $0 \leq r < \ell$ and $0 \leq s < q_n$ we have \begin{multline*} \big\| x + (r q_n + s) \alpha - \beta \big\| = \big\| x + (s-d) \alpha +(r-c) q_n \alpha + k \alpha - \beta \big\| \\ \geq \left\| \frac{\sigma([s-d]_{q_n})}{q_n} - \beta \right\| - \left\| (s-d) \alpha- \frac{\sigma([s-d]_{q_n})}{q_n} +(r-c) q_n \alpha + x + k \alpha \right\|. \end{multline*} By hypothesis, \[ \left\| \frac{\sigma([s-d]_{q_n})}{q_n} - \beta \right\| > \frac{A}{q_n} > A \lambda^{(n-1)}\] for every $0 \leq s < q_n$. Moreover, using the inequality $|r-c|\leq \ell-1$ and Lemma~\ref{near rational orbit}, \begin{align*} \left\| (s-d) \alpha- \frac{\sigma([s-d]_{q_n})}{q_n} +(r-c) q_n \alpha + x + k \alpha\right\| \leq & \left\| (s-d) \alpha- \frac{\sigma([s-d]_{q_n})}{q_n} \right\| \\ & + |r-c| \|q_n \alpha \| + \| x+k \alpha \| \\ \leq &\ \lambda^{(n)} + |r-c| \lambda^{(n)} + \frac{\lambda^{(n)}}{2} \\ \leq &\ (\ell+\frac{1}{2}) \lambda^{(n)} < B a_{n+1} \lambda^{(n)}\\ < &\ B \lambda^{(n-1)}. \end{align*} Consequently, \[ \big\| x + (r q_n + s) \alpha - \beta \big\| > (1-B/A) \left\| \frac{\sigma([s-d]_{q_n})}{q_n} - \beta \right\|.\] Taking reciprocals while summing over $s$ and applying Lemma~\ref{sumoverrationals} gives \begin{align*} \sum_{s=0}^{q_n-1} \frac{1}{\| x + (r q_n + s) \alpha - \beta \|} & < \frac{1}{1-B/A} \sum_{s=0}^{q_n-1} \frac{1}{\| \frac{\sigma([s-d]_{q_n})}{q_n} - \beta \| } \\ & = \frac{1}{1-B/A} \sum_{j=0}^{q_n-1} \frac{1}{\| \frac{j}{q_n} - \beta \|} \\ & < \frac{2 q_n( A^{-1} + \log(A^{-1}q_n))}{1-B/A}, \end{align*} as required. \end{proof} \begin{lemma} \label{lower bound} Let $\alpha\notin \mathbb{Q}$. Fix some $n>0$ and let $\ell \geq 1$ be such that $\ell q_n < q_{n+1}$. Let $E_{n, \ell}$ be as in (\ref{DefE}). 
Then, for any $x\in E_{n, \ell}$, \[\sum_{k=0}^{\ell q_n-1} \frac{1}{\|x + k \alpha \|} \geq \frac{ \log \ell}{\lambda^{(n)}}.\] \end{lemma} \begin{remark}\label{lower bound2} Replacing the interval $I_n$ by $\tilde I_n = (-2\lambda^{(n)},2\lambda^{(n)})$, and the set $E_{\ell,n}$ by $\tilde E_{\ell,n}$ accordingly, one gets a similar result: \[\sum_{k=0}^{\ell q_n-1} \frac{1}{\|x + k \alpha \|} \geq \frac{ \log \ell}{4\lambda^{(n)}}.\] \end{remark} The proof is based on taking into account only the contribution of points of the ``ground floor'' of the renormalization interval. \begin{proof} Fix some $x \in E_{n, \ell}$. Then, by definition of $E_{n, \ell}$, there exists some $0\leq m < \ell q_n$ such that $\|x+m \alpha \|< \frac{\lambda^{(n)}}{2}$. Let $0\leq c<\ell$ and $0 \leq d < q_n$ be integers such that $m=c q_n + d$. Then \begin{align*} \sum_{k=0}^{\ell q_n - 1} \frac{1}{\|x + k \alpha \|} & = \sum_{r=0}^{\ell-1} \sum_{s=0}^{q_n-1} \frac{1}{\| x + (r q_n+s) \alpha \|} \\ & > \sum_{r=0}^{\ell-1} \frac{1}{\|(x+ (r q_n+d) \alpha\|} \\ & = \sum_{r=0}^{\ell-1} \frac{1}{\|x' + (r-c) q_n \alpha \|}, \end{align*} where $x' = x+m \alpha$. Note that \[\| x' + (r-c) q_n \alpha \| \leq \|(r-c)q_n \alpha \| + \|x'\| < |r-c| \lambda^{(n)} + \frac{\lambda^{(n)}}{2}.\] Hence \begin{equation}\label{riemannsum} \sum_{r=0}^{\ell-1} \frac{1}{\| x' + (r-c) q_n \alpha \|} \geq \sum_{r=0}^{\ell-1} \frac{1}{\lambda^{(n)} ( |r-c|+ \frac{1}{2})} \geq \sum_{r=0}^{\ell-1} \frac{1}{\lambda^{(n)}(r+\frac{1}{2})}. \end{equation} From that the lemma follows easily, since (\ref{riemannsum}) is an upper Riemann sum of the integral \[\int_{\frac{1}{2}}^{\ell+\frac{1}{2}} \frac{dx}{\lambda^{(n)} x}, \] whose value is greater than $ \frac{\log \ell}{\lambda^{(n)}}$. \end{proof} We end this section by a lemma that will be used in the next one. \begin{lemma}\label{smallest distance} Let $a, b, q$ be positive integers, with $b \geq 2$. Suppose that $a$ and $b$ are coprime and also that $b$ and $q$ are coprime. Then \begin{equation*} \left\| \frac{n}{q}- \frac{a}{b} \right\| \geq \frac{1}{bq} \end{equation*} for every $n \in \mathbb{Z}$. \end{lemma} \begin{proof} It follows from $\gcd(a,b) = \gcd(b,q) = 1$ that $a q \not\equiv 0 \mod b$. Thus \[b(n+mq)-aq \neq 0\] for every $n, m \in \mathbb{Z}$, and hence \[ \left| \frac{n}{q} - \frac{a}{b} +m \right| = \left| \frac{b(n+mq)-aq}{bq} \right| \geq \frac{1}{bq}, \] proving the lemma. \end{proof} \section{Extreme historic behaviour} \label{SecLiou} This section is devoted to two theorems on the existence of reparameterized linear flows with extreme historic behaviour. Theorem~\ref{refined rational distinct orbits} deals with stopping points on rationally separated orbits whereas Theorem~\ref{generic distinct orbits} deals with the generic case. \subsection{Precise statements and sketch of proofs} We start by identifying the set of angles for which we are going to prove that the conclusion of Theorem~\ref{rational distinct orbits} holds before stating a more precise version of it. \begin{definition} Let $\nu$ be a positive number, $k \geq 2$ be an integer, and $\alpha = [a_0; a_1, \ldots]$ an irrational number with convergents $p_n/q_n$. We say that $\alpha$ is \emph{$(\nu, k)$-approximable} if there are infinitely many $n \in \mathbb{N}$ such that \[ \begin{cases} a_{n+1} \geq q_n^\nu, \text{ and} \\ \gcd(q_n, k) = 1. \end{cases} \] Let $\mathcal{W}(\nu,k)$ denote the set of numbers that are $(k, \nu)$-approximable. 
Let \[\mathcal{W}(\nu) = \bigcap_{k\geq 2} \mathcal{W}(\nu,k) \] and \[\mathcal{W} = \bigcup_{\nu>0} \mathcal{W}(\nu).\] \end{definition} \begin{proposition}\label{W is generic} For every $\nu>0$ and every integer $k \geq 2$, the set $\mathcal{W}(\nu,k)$ is a dense $G_\delta$ subset of $\mathbb{R}$. \end{proposition} Since $\mathcal{W}(\nu,k)$ is a dense $G_\delta$ set, so is $\mathcal{W}(\nu)$ for every $\nu$. The proof of Proposition~\ref{W is generic} is a straightforward $G_\delta$ argument, but it relies on the fact that we may make small alterations to $\alpha$ to obtain $\gcd(q_n,k) =1$ for large $n$. The following lemma ensures that this is possible. \begin{lemma} \label{pqk} Let $a,b,c$ be positive integers such that $a$ and $b$ are coprime. Then there exists a positive integer $i$ such that $a+ib$ and $c$ are coprime. \end{lemma} \begin{proof} Let $i$ be the product of all prime factors of $c$ that do not divide $ab$, if such factors exist. Otherwise let $i=1$. Note that $a$, $b$, and $i$ have no common factors, and that every prime factor of $c$ divides $abi$. We claim that $c$ and $a+ib$ are coprime. Indeed, suppose that $p$ is a prime factor of $c$. Then $p \mid abi$. If $p$ is a factor of $a$ then $p$ is not a factor of $bi$. In particular $p \nmid a+ib$. If $p$ is not a factor of $a$, then $p$ is a factor of $bi$. Here again $p \nmid a+ib$. We have shown that there is no prime number that divides both $c$ and $a+ib$. \end{proof} \begin{proof}[Proof of Proposition~\ref{W is generic}] Fix $\nu>0$ and an integer $k \geq 2$. Let us denote by $C(a_0; a_1, \ldots, a_n)$ the open cylinder set \[ \left\{ a_0 + \frac{1}{ a_1+ \frac{1}{ \ddots \ +\frac{1}{a_n+y} }}: 0<y<1 \right\} \] Let $\mathcal{C}_n$ be the collection of all cylinders of the form $C(a_0;a_1, \ldots, a_n)$. If $\alpha$ and $\alpha'$ belong to the same cylinder $C(a_0;a_1, \ldots, a_n)$, then \[ \|\alpha-\alpha'\| \leq \|\alpha-\frac{p_n}{q_n}\| + \| \alpha'-\frac{p_n}{q_n}\| < \frac{2}{q_{n+1}} \leq 2 \cdot 2^{-\lfloor n/2 \rfloor}\] so that the diameter of cylinders in $\mathcal{C}_n$ converge to zero uniformly as $ n \to \infty$. Let $\mathcal{A}_n$ be the collection of cylinders on which $\gcd(q_n,k)=1$ is satisfied. (Note that $q_n=q_n(\alpha)$ is constant on cylinders in $\mathcal{C}_n$.) We claim that any open set in $\mathbb{R}$ contains an element of $\mathcal{A}_n$. Indeed, sine the diameter of cylinders in $\mathcal{C}_n$ tend uniformly to zero, any open set contains a cylinder in $\mathcal{C}_{n-1}$, $C(a_0; a_1, \ldots, a_{n-1})$ say, for $n$ sufficiently large. Now, by Lemma \ref{pqk}, we may choose a number $a_n \geq 1$ such that $q_n = q_{n-2}+ a_n q_{n-1}$ and $k$ are coprime. Hence $C(a_0; a_1, \ldots, a_n) \in \mathcal{A}_n$. We have proved that \[\bigcup_{n\geq m} \bigcup_{C \in \mathcal{A}_n}C\] is dense in $\mathbb{T}$ for every $m$. Let $\mathcal{B}_{n+1}$ be the collection of cylinders of the form $C(a_0; a_1, \ldots, a_{n+1})$ such that \begin{enumerate} \item $C(a_0;a_1, \ldots, a_n) \in \mathcal{A}_n$, and \item points in $C(a_0, a_1, \ldots, a_{n+1})$ satisfy $a_{n+1} \geq q_n^\nu$. \end{enumerate} It is clear that if $C(a_0;a_1, \ldots,a_n)$ belongs to $\mathcal{A}_n$ then $C(a_0; a_1, \ldots, a_n, \ell)$ belongs to $\mathcal{B}_{n+1}$ for $\ell$ sufficiently large. In other words, each $C \in \mathcal{A}_n$ contains a subcylinder $C' \in \mathcal{B}_{n+1}$. Consequently \[O_m = \bigcup_{n\geq m} \bigcup_{C \in \mathcal{B}_n} C\] is dense in $\mathbb{T}$. 
The proof follows by observing that \[\mathcal{W}(\nu,k) = \bigcap_{m} O_m.\] \end{proof} \begin{theorem}[refined Theorem~\ref{rational distinct orbits}]\label{refined rational distinct orbits} Let $\mathbf{p}=(0,0)$ and $\mathbf{q} = (0, \frac{a}{b})$, where $\gcd(a,b)=1$, and suppose that $\alpha \in \mathcal{W}(\nu,b)$ for some $\nu>0$. If $\phi^t$ is a reparametrized linear flow satisfying (SH), then it has an extreme historic behaviour. \end{theorem} We now turn to the generic case. \begin{definition} We say that $\alpha = [a_0; a_1, a_2, \ldots ]$ is a Liouville number if, given any $k>0$ there are infinitely many values of $n$ for which \[a_{n+1} > q_n^k\] holds. \end{definition} \begin{remark} The above definition Liouville number is stated in a form suited for the needs in this this paper. Is distinct from, but equivalent to, the standard definition that $\alpha$ is Liouville if, given any positive integer $k$, there exist $p,q \in \mathbb{Z}$ such that \[\left| \alpha - \frac{p}{q} \right| < \frac{1}{q^k}. \] \end{remark} \begin{theorem} \label{generic distinct orbits} Let $\alpha$ be a Liouville number. Then there exists a dense $G_\delta$ set $B \subset \mathbb{T}$ such that if $\beta \in B$, then any reparameterized linear flow satisfying (SH) with angle $\alpha$ and stopping points at $(0,0)$ and $(0,\beta)$ has an extreme historic behaviour. \end{theorem} The proofs of Theorems~\ref{refined rational distinct orbits} and \ref{generic distinct orbits} are based on some rather simple ideas, as we now shall explain. Propositions \ref{criterium2} and \ref{criterium3} tell us that in order to detect an extreme historic behaviour for a reparameterized flow with stopping points at $\mathbf{p}=(0,0)$ and $\mathbf{q}=(0, \beta)$, we must show that the ratio between $S_m(x)$ and $S_m(x-\beta)$ can be made larger (or smaller) than an arbitrary constant on a set of substantial measure. We now explain how this is done. Consider a situation in which $\alpha$ has two successive convergents $p_n/q_n$ and $p_{n+1}/q_{n+1}$ such that $q_{n+1}$ is very large compared to $q_n$. Then the orbit $\mathcal{O}(q_n)$ is very close to the set $\{k/q_n: 0 \leq k < q_n \}$ (Lemma~\ref{near rational orbit}). Suppose that $\beta$ happens to lie at a safe distance ($> A \lambda^{(n-1)} \approx A/ q_n$) from this orbit. Then $R_\beta(\mathcal{O}(q_n))$ --- the orbit of length $q_n$ starting from $\beta$ --- is intertwined with that of $\mathcal{O}(q_n)$ so that each gap of one orbit contains exactly on point of the other orbit and vice versa. Moreover, points of the two orbits are separated from one another by a distance of order $A \lambda^{(n-1)}$. One way to guarantee that $\beta$ is on a safe distance from the orbit $\mathcal{O}(q_n)$ is to take $\beta$ to be rational of the form $\frac{a}{b}$, and ask for $q_n$ and $b$ to be coprime (Lemma~\ref{smallest distance}). This is why we take $\alpha \in \mathcal{L}_b$ in Theorem~\ref{refined rational distinct orbits}. Another way is to simply move $\beta$ so that it lies more or less in the middle of one of the gaps defined by the orbit $\mathcal{O}(q_n)$. This is the idea exploited in the proof of Theorem~\ref{generic distinct orbits}. Now, since $q_{n+1}$ is much larger than $q_n$ we have that (see Lemma~\ref{properties}) $q_{n+1} \approx a_{n+1} q_n$ and also that $\lambda^{(n+1)} \approx a_{n+1} \lambda^{(n)}$ (see \eqref{Eqaeta}). Let $\ell$ be some integer approximately equal to $B a_{n+1}$ for some fixed $0<B<A$. 
Then the orbit $\mathcal{O}(\ell q_n)$ is the union of $q_n$ small ``blocks'', each of length $\ell$ (Equation\eqref{EqOrbit}). If $B$ is not too big, the orbits $\mathcal{O}(\ell q_n)$ and $R_\beta(\mathcal{O}(\ell q_n))$ are still on a safe distance from one another. Figure~\ref{orbits} illustrates this in a situation where $q_n=7$ and $\ell = 4$. \begin{figure}\label{orbits \includegraphics[trim=90pt 90pt 70pt 70pt, clip, scale=0.5]{orbitstructure-with-numbers.pdf} \caption{Illustration of an orbit (dots) of length $\ell q_n$ where $q_n= 7$ and $\ell = 4$, together with its rotation (crosses). Each orbit has seven blocks of four points each. The distance between corresponding points in adjacent blocks is approximately $1/q_n$, whereas the distance between points within the same block is approximately $1/q_{n+1}$. In this figure, $q_{n+1}$ is about ten times larger than $q_n$ so that each block of lenght four 'fills up' nearly half the gap between points of the orbit of length $q_n$. The proof of theorem \ref{refined rational distinct orbits} requires the ration between $q_{n+1}$ and $q_n$ to be larger than some fixed positive power of $q_n$ for infinitely many $n$, while the proof of theorem \ref{generic distinct orbits} requires the same ratio to be larger than any power of $q_n$.} \end{figure} Let $E_{n, \ell}$ be an $\lambda^{(n)}/2$-neighbourhood of $\mathcal{O}(\ell q_n)$. Then, modulo a finite number of point , $E_{n, \ell}$ is a disjoint union of $q_n$ intervals of size $\approx B \lambda^{(n-1)} \approx B /q_n$, each interval corresponding to a 'block'. Now let us rotate this set $E_{n, \ell}$ by the angle $-\ell q_n \alpha$. The resulting set, $E_{n, \ell}'$ is then a $\lambda^{(n)}/2$-neighborhood of the pre-orbit of length $\ell q_n$ of the point $0$. Thus a point $x$ belongs to $E_{n, \ell}'$ if and only if one of its first $\ell q_n$ iterates lies within an $\lambda^{(n)}/2$-distance from $0$. In this case Lemma~\ref{lower bound} tells us that $S_m ( x) $ is at least of order $(\log \ell) / \lambda^{(n)} \approx a_{n+1} q_n \log \ell $. On the other hand, $S_m(x-\beta)$ is at most of order $B a_{n+1} q_n \log(q_n)$ (Lemma~\ref{ABC}). Thus in order to have $S_m(x)$ larger than, say, $K S_m(x-\beta)$, we impose the condition that $a_{n+1}$ (and hence $\ell$) be of order $q_n^{BK}$. Since $E_{n, \ell} '$ consists of $\ell q_n $ disjoint intervals of length $\lambda^{(n)}$, its $\boldsymbol{\lambda}$-measure is equal to $\ell q_n \lambda^{(n)} \approx B a_{n+1} q_n \lambda^{(n)} \approx B$. In the proof of Theorem~\ref{generic distinct orbits}, since $\alpha$ is Liouville, $a_{n+1}$ will be larger than $q_n^{BK}$ infinitely often, whatever the value of $BK$. Hence the value of $B$ is uniformly bounded away from zero (i.e. does not depend on $K$). On the other hand, in Theorem~\ref{refined rational distinct orbits}, $BK$ has to be of order $\nu$, so $B$ needs to be taken smaller as we increase $K$. \subsection{Proof of Theorem~\ref{refined rational distinct orbits}} We fix some $\beta \in \mathbb{Q} \setminus \mathbb{Z}$ and write $\beta = \frac{a}{b}$, with $\gcd(a,b)=1$. Fix also some $\nu>0$ and $\alpha \in \mathcal{W}(\nu,b)$. 
We shall prove that, given any $K>1$ and $n \in \mathbb{N}$, there exists an integer $m \geq n$ and a set $E_{n, \ell} \subset \mathbb{T}$ with $\boldsymbol{\lambda}(E_{n, \ell}) \geq \nu/(64bK)$ such that \begin{equation}\label{dominance1} \sum_{k=0}^{m-1}\frac{1}{\|k \alpha + x \|} > K \sum_{k=0}^{m-1} \frac{1}{\|k \alpha+x-\frac{a}{b} \|} \end{equation} holds for every $x \in E_{n, \ell}$. Then $\phi^t$ has an extreme historic behaviour according to Proposition~\ref{criterium3}. Fix $K>1$ and $N \in \mathbb{N}$. Upon possibly increasing $K$ we can (and do) suppose that $K> \nu/8$. Since $\alpha \in \mathcal{W}(\nu,b)$ there exists $n\geq N$ for which \[a_{n+1}>q_n^\nu\] and $\gcd(q_n,b)=1$. Pick such $n$, with the additional property that \begin{equation}\label{large q_i} q_n^{\nu/2} > \frac{16bK e^{\nu/2}}{\nu} \qquad \text{and}\qquad q_n^\nu > \frac{32 bK}{\nu}. \end{equation} Therefore we can choose an integer $\ell \geq 1$ such that \[2<\frac{\nu a_{n+1}}{16bK} < \ell < \ell+1 \leq \frac{\nu a_{n+1}}{8bK}.\] Let $E_{n, \ell}$ be as in (\ref{DefE}). We claim that (\ref{dominance1}) holds for $m=\ell q_n$ and any $x \in E_{n, \ell}$. Lemma~\ref{smallest distance} tells us that \[\left\|\frac{i}{q_n}- \frac{a}{b} \right\| \geq \frac{1}{b q_n}\] for every integer $i$. We can therefore apply Lemma~\ref{ABC} with $A=1/b$, $B=\nu/(8bK)$. Doing so gives (Recall that we are assuming that $K>\nu/4$.) \begin{align*} \sum_{k=0}^{\ell q_n-1} \frac{1}{\| x+k\alpha - \frac{a}{b} \|} & < \frac{2\ell q_n}{1-\frac{\nu}{8K}}(b + \log(bq_n))\\ & < 4 \ell q_n (b+\log(bq_n)) \end{align*} for every $x \in E_{n, \ell}$. Moreover, as $K>\nu/8>\nu/(8b)$), we have \[\ell q_n < \frac{\nu a_{n+1}q_n}{8bK} \le \frac{\nu q_{n+1}}{8bK} < q_{n+1}.\] We may therefore apply Lemma~\ref{lower bound}, obtaining the estimate \[\sum_{k=0}^{\ell q_n-1} \frac{1}{\| x+k\alpha\|} \geq \frac{\log \ell}{\lambda^{(n)}}.\] Thus in order to show (\ref{dominance1}), it suffices to show that \[\log \ell > 4 K \lambda^{(n)} \ell q_n (b+\log(bq_n)).\] But by Lemma~\ref{properties}, \[ \ell \lambda^{(n)} q_n < \frac{\nu a_{n+1} \lambda^{(n)} q_n}{8bK} < \frac{\nu \lambda^{(n-1)} q_n}{8bK} < \frac{\nu}{8bK}, \] so for $q_n$ large enough \[4K\lambda^{(n)} \ell q_n\big(b+\log(b q_n)\big) < \frac{\nu}{2b}\big(b+\log(bq_n)\big) < \frac{\nu}{2}\big(1+\log(q_n)\big).\] Hence it suffices to show that \[ \frac{\nu}{2} \big(1+\log(q_n)\big) < \log \ell. \] But $\ell$ was chosen so that \[\ell > \frac{\nu a_{n+1}}{16bK}> \frac{\nu}{16bK} q_n^\nu > e^{\nu/2} q_n^{\nu/2}\] in view of (\ref{large q_i}). We have therefore shown that (\ref{dominance1}) holds for $m=\ell q_n$ whenever $x \in E_{n, \ell}$. It remains to show that $\lambda(E_{n, \ell}) \geq \nu/(64 b K)$. Applying Lemma~\ref{size} to the set $E_{n, \ell}$ we see that \[\boldsymbol{\lambda}(E_{n, \ell}) = \ell q_n \lambda^{(n)} \geq \frac{\nu a_{n+1}}{16bK} \lambda^{(n)} q_n > \frac{\nu \lambda^{(n-1)}}{32 bK} q_n > \frac{\nu}{64bK}.\] This completes the proof. \subsection{Proof of Theorem~\ref{generic distinct orbits}} Fix some Liouville number $\alpha$. Let \[ C(n,K,\beta)= \big\{ x \in \mathbb{T}: \Theta_n^\beta (x)> K \big\},\] and \[D(n,K) = \big\{ \beta \in \mathbb{T}: \boldsymbol{\lambda}(C(n,K, \beta))>1/16 \big\}. \] Clearly, the sets $D(n,K)$ are open. Let \[\mathcal{R} = \bigcap_{K > 1} \bigcup_{n \geq 1} D(n,K).\] Then according to Proposition~\ref{criterium2}, any reparameterized flow satisfying (SH) with $ \beta \in \mathcal{R} $ has an extreme historic behaviour. 
Thus in order to prove Theorem~\ref{generic distinct orbits} it suffices to prove that $\bigcup_{n \geq 1} D(n,K)$ is dense in $\mathbb{T}$ for every $K>1$. To this end, we fix $K>1$, $\beta_0 \in \mathbb{T}$ and $\epsilon>0$ arbitrarily. We shall prove that there is some $m \geq 1$ and $ \beta \in \mathbb{T}$ with $|\beta-\beta_0| < \epsilon$ such that $\beta \in D(m, K)$. Write $\alpha$ as $[a_0; a_1, a_2, \ldots]$ and let $p_n/q_n$ be its convergents. Then fix some $n \in \mathbb{N}$ such that $1/q_n < \epsilon$ and $a_{n+1} > q_n^{K+1}$. Upon possibly increasing $n$ we can (and do) assume that $q_n > n$ and also that $q_n > 8 e^{2K} 2^K$ (which in particular is larger than $16$). Chose an integer $0 \leq b < q_n$ such that \[\left\|\beta_0 - \left(\frac{b}{q_n} + \frac{1}{2 q_n} \right) \right\| \le \epsilon \] and let \[ \beta = \frac{b}{q_n} + \frac{1}{2 q_n} .\] Since $a_{n+1} > 16$ we can (and do) fix $ \ell \in \mathbb{N}$ such that \[\frac{a_{n+1}}{8} < \ell< \ell+1 < \frac{a_{n+1}}{4}.\] Let $m = \ell q_n$. We claim that $ \beta \in D(m,K)$. More specifically, let $E_{n,\ell}$ be as in (\ref{DefE}). We shall prove that $E_{n, \ell} \subset C(m, K, \beta)$ and that $\boldsymbol{\lambda}(E_{n, \ell}) \geq 1/16$. Indeed, the latter follows from Lemma~\ref{size}, our choice of $\ell > a_{n+1}/8$ and the inequality $a_{n+1} \lambda^{(n)} \geq \lambda^{(n-1)}/2$. Now fix $x \in E_{n, \ell}$. We shall prove that \begin{equation}\label{dominance4} \sum_{k=0}^{m-1}\frac{1}{\|k \alpha + x \|} > K \sum_{k=0}^{m-1} \frac{1}{\|k \alpha+x-\beta \|}. \end{equation} Our choice of $\beta$ is such that \[ \left\| \frac{i}{q_n} - \beta \right\| \geq \frac{1}{2 q_n} \] for every $0 \leq i < q_n$. We can therefore apply Lemma~\ref{ABC} with $A = 1/2$ and $B=1/4$. Doing so gives us the estimate \[K \sum_{k=0}^{m-1} \frac{1}{\|k \alpha+x-\beta \|} < 4 K \ell q_n ( 2 + \log(2 q_n))\ .\] From Lemma~\ref{lower bound} we have \[ \sum_{k=0}^{\ell q_n-1} \frac{1}{\|x + k \alpha \|} \geq \frac{ \log \ell}{\lambda^{(n)}} \] for every $x \in E_{n, \ell}$. Thus in order to prove that \eqref{dominance4} holds, it suffices to show that \ \[\frac{\log \ell}{\lambda^{(n)}} > 4K \ell q_n (2 + \log(2 q_n)) .\] But we have chosen $\ell$ such that (using Lemma~\ref{properties}) \[\ell \lambda^{(n)} q_n < \frac{a_{n+1} \lambda^{(n)} q_n}{4} < \frac{\lambda^{(n-1)} q_n}{4} < \frac{1}{4}.\] It therefore suffices to verify that $\log \ell > K(2+\log(2 q_n))$. But \[\ell>\frac{a_{n+1}}{8} > \frac{q_n^{K+1}}{8} > \frac{q_n^K 8e^{2K}2^K}{8},\] which implies the required property. \section{Divergence of sums: Diophantine case} In this section, we use the estimates we got from Section \ref{SecTech} and Diophantine properties of almost any number to get historic behaviour for almost any angle $\alpha$. \begin{theorem}\label{PropDivSum} Let $\alpha \in \mathbb{R}$ be such that $a_n \geq 2$ for infinitely many $n$ and \[\sum_{\substack{n\ge 2 \\ a_n, a_{n+1}\ge 2}} \frac{1}{\log q_n} = \infty.\] Then, given any $\mathbf{p}$ and $\mathbf{q}$, the reparameterized linear flow $\phi^t$ satisfying (SH) with stopping points at $\mathbf{p}$ and $\mathbf{q}$ has historic behaviour. 
When moreover $\mathbf{q}$ is on the positive orbit of $\mathbf{p}$, then the ergodic limit set of almost any point $\mathbf{x}$ is explicit: \[ p\omega(\mathbf{x}) = \left[\mu_\infty\, ,\ \delta_\mathbf{p}\right].\] \end{theorem} The fact that the set of angles $\alpha$ satisfying the hypotheses of this theorem is of full measure is a consequence of a theorem due to Khinchin and Levy, which asserts that for Lebesgue-almost every $\alpha \in \mathbb{R}$, the denominators of the convergents satisfy \[\lim_{n\to +\infty}\frac{\log q_n}{n} = \frac{\pi^2}{12 \log 2},\] hence \[\sum_{n\ge 0} \frac{1}{\log q_n} = +\infty.\] Moreover, for Lebesgue-almost every $\alpha \in \mathbb{R}$, and any $b\in\mathbb{N}$, one has \[\lim_{n\to +\infty} \frac{1}{n}\operatorname{card}\big\{j\le n : a_j = b\big\} = \log_2\left(\frac{(b+1)^2}{b(b+2)}\right),\] and the Gauss map is mixing, implying that for Lebesgue-almost every $\alpha \in \mathbb{R}$, and any $b,b'\in\mathbb{N}$, one has \[\lim_{n\to +\infty} \frac{1}{n}\operatorname{card}\big\{j\le n : a_j = b, a_{j+1} = b'\big\} = \log_2\left(\frac{(b+1)^2}{b(b+2)}\right)\log_2\left(\frac{(b'+1)^2}{b'(b'+2)}\right),\] For a proof see e.g. Propositions 3.1 and 3.4 of Durand \cite{Durand}. From this one can easily deduce the following\footnote{E.g. using the partition of $\mathbb{N}$ by intervals $[2^k, 2^{k+1})$.}: \begin{equation}\label{EqSumInvQ} \sum_{\substack{n\ge 0\\ a_n, a_{n+1}\ge 2}} \frac{1}{\log q_n} = +\infty \end{equation} It could be conjectured that there is an extreme historic behaviour property for almost any $\alpha$ and ``most of'' $\beta$; unfortunately we were only able to establish that the sequences $\Theta_k^\beta (x)$ fail to converge for almost every $x$: the theorem's proof tells us that the sequences $\Theta_k^\beta (x)$ have at least $0$ or $+\infty$ as a limit point, hence that $p\omega(\mathbf{x})$ contains at least $\delta_\mathbf{p}$ or $\delta_\mathbf{q}$. The key property that allows us to conclude in the case where $\mathbf{q}$ is on the positive orbit of $\mathbf{p}$ is that $\liminf \Theta_k^\beta (x) = 1$. \medskip Let us move to the Theorem's proof. Fix $\alpha$ as in the hypothesis of Theorem~\ref{PropDivSum} and let $0< \epsilon_n<1$ be a decreasing sequence of positive numbers such that $\epsilon_n \to 0$ as $n \to \infty$ and satisfying \[\sum_{\substack{n \geq 2 \\ a_n,a_{n+1} \geq 2}} \frac{\epsilon_n}{\log (3q_n)} = \infty.\] For $n \geq 1$ let \[I_n = \left(-\frac{\epsilon_n}{q_n \log(3 q_n)}, \frac{\epsilon_n}{q_n \log(3q_n)} \right ) \] and set \[E_n = \bigcup_{i=0}^{q_n-1} R_\alpha^{-i} (I_n)\] and \[E = \bigcap_{N \geq 1 }\bigcup_{\substack{n \ge N\\ a_{n+1} \ge 2}} E_n.\] \begin{lemma} \label{LemEnLeb1} Under the hypotheses of Theorem~\ref{PropDivSum}, the set $E$ is of full $\boldsymbol{\lambda}$-measure. \end{lemma} We first show how to deduce Theorem~\ref{PropDivSum} from Lemma~\ref{LemEnLeb1}. Then we proceed to the proof of the lemma. \begin{proof}[Proof of Theorem~\ref{PropDivSum}] First, by Lemma~\ref{LemEnLeb1}, denoting $R_\alpha^{-\mathbb{N}}(0)$ the pre-orbit of $0$, the set \[E^* = E \setminus R_\alpha^{-\mathbb{N}}(0)\] has total measure. Recall that $\psi(x) = \|x\|^{-1}$. Let us prove that if $x \in E^*$, then for any $M\in\mathbb{N}$, there exists $i, n\in\mathbb{N}$ satisfying $M \le i \le q_n$ and such that \begin{equation}\label{EqThDioph} \psi\big(R_\alpha^i(x)\big) > \frac{1}{6 \epsilon_n} \,\sum_{j=0}^{i-1} \psi\big(R_\alpha^j(x)\big). \end{equation} Fix $M\in\mathbb{N}$ arbitrarily. 
As $x$ is not in the pre-orbit of $0$, one has $d = \min_{0\le i < M} \|R_\alpha^i(x)\|>0$. For any $N\ge M$ large enough, one has $d>1/(q_N \log(3q_N))$. As $x\in E_n$ for some $n\ge N$ with $a_{n+1}\ge 2$, we know that there exists $i < q_n$ such that $R_\alpha^{i}(x) \in I_n$. Hence, $\|R_\alpha^{ i}(x)\| < d $ and so $i\ge M$. Applying Lemma~\ref{LemSizeDiverg} with $\epsilon_n$ in place of $\epsilon$ yields \eqref{EqThDioph}. We will denote by $i_k, n_k$ some increasing sequences of numbers (depending \textit{a priori} on $x$) satisfying \eqref{EqThDioph}. Let $\Sigma = \{x_0\} \times \mathbb{T}$ be chosen as in Section \ref{SecDefFlow}, so that $p_0 \neq q_0$ and let $\beta = q_0 - p_0$. As $\beta \neq 0$, there exists $A>0$ such that if $\psi(y)>A$, then $\psi(y-\beta)<A$. Hence, for $k$ large enough, one has ($S_n$ is defined in \eqref{DefSn}) \[S_{i_k+1}(x) \ge \left(1+\frac{1}{6 \epsilon_{n_k}}\right) S_{i_k}(x) \quad \text{and} \quad S_{i_k+1}(x-\beta) \le S_{i_k}(x-\beta) + A.\] (the second inequality comes from the fact that $\lim_{k\to \infty}\psi\big(R_\alpha^{i_k}(x)\big) = \infty$ and thus $\psi\big(R_\alpha^{i_k}(x-\beta)\big)\le A$ for $k$ large enough.) Hence ($\Theta$ is defined in \eqref{DefTheta}), using the fact that $\lim_{k\to \infty} S_{i_k}(x-\beta) = \infty$, for any $k$ large enough, \[ \Theta_{i_k+1}^\beta(x) = \frac{S_{i_k+1}(x)}{S_{i_k+1}(x-\beta)}\ge \frac{(1+1/(6 \epsilon_{n_k}) ) S_{i_k}(x)}{S_{i_k}(x-\beta) + A} \ge \frac{1}{12 \epsilon_{n_k}} \Theta_{i_k}^\beta(x). \] Hence, there exists $(i_k) \to +\infty$ such that \begin{equation}\label{EqFinalCOntra2} \Theta_{i_k}^\beta(x) = o\left(\Theta_{i_k+1}^\beta(x)\right). \end{equation} But Proposition~\ref{PropPossibOmega} tells us that if $\phi^t$ has a physical measure, then it is equal to $\mu_\infty$, and Proposition~\ref{criterium1} tells us that in this case the sequences $\Theta_{n}^\beta(x)$ converge to a positive real number for almost every $x$. This is in contradiction with \eqref{EqFinalCOntra2}, so $\phi^t$ has an historic behaviour for almost any point. \medskip For the second part of the theorem, the hypothesis that $\mathbf{q}$ is on the positive orbit of $\mathbf{p}$ implies the existence of $j>0$ such that $\beta = j\alpha \mod 1$. Hence, for any $x\notin R_\alpha^{-\mathbb{N}}(0)$, and every $n \geq j$ we have \begin{equation}\label{independent of n} S_n(x-\beta) - S_{n-j}(x) = S_{j}(x-\beta), \end{equation} which is independent of $n$. Thus for a given $x$ the right hand side of (\ref{independent of n}) is a constant $B>0$, say. Thus for every $n \geq j$ we have \begin{align} \label{LastEq} \Theta_n^\beta(x) & = \frac{S_n(x)}{S_n(x-\beta)} = \frac{S_n(x)-S_{n-j}(x)}{S_n(x-\beta)} + \frac{S_{n-j}(x)- S_n(x-\beta)}{S_n(x-\beta)} + 1\\ & \geq - \frac{B}{S_n(x-\beta)} + 1,\nonumber \end{align} (since $S_{n}(x)-S_{n-j}(x) = S_j(x+(n-j) \alpha )\geq 0$ for every $n \ge j$). It follows that $\liminf \Theta_n^\beta(x) \ge 1$. Similarly to what we have seen in the first part of the proof, as $\alpha\notin\mathbb{Q}$, there exists $C>0$ such that if $\psi(y)>C$, then $\psi(R_\alpha^i(y))<C$ for any $1\le i \le j$. Hence (using \eqref{LastEq} applied to $n=i_k+j+1$), \[ \Theta_{i_k+j+1}^\beta(x) = \frac{S_{i_k+j+1}(x)-S_{i_k+1}(x)}{S_{i_k+j+1}(x-\beta)} + \frac{S_{i_k+1}(x)- S_{i_k+j+1}(x-\beta)}{S_{i_k+j+1}(x-\beta)} + 1,\] which implies that \[\Theta_{i_k+j}^\beta(x) -1 \le \frac{jC - B}{S_{i_k+j}(x-\beta)} \underset{n\to +\infty}{\longrightarrow} 0,\] and that $\liminf \Theta_n^\beta(x) \le 1$. 
Summing up, one has $\liminf \Theta_n^\beta(x) = 1$. Moreover, from \eqref{EqFinalCOntra2} one also has $\limsup \Theta_n^\beta(x) = +\infty$. The theorem then directly follows from Proposition~\ref{criterium1}. \end{proof} To prove Lemma~\ref{LemEnLeb1}, we use a variation of Fuchs and Kim \cite[Theorem 1.2]{MR3494133}. The initial statement deals with inhomogeneous Diophantine approximation: it gives a criterion under which the orbit of almost any point of the circle under a rigid rotation approaches the origin at a given speed. Its proof consists in a suitable application of a Borel-Cantelli lemma, allowed by Denjoy-Koksma inequality. \begin{theorem}\label{CiteTh12} Let $\varphi(n)$ be a nonnegative sequence and $\alpha$ be an irrational number with principal convergents $p_n/q_n$. For $j\in\mathbb{N}$, denote $n(j)$ the number satisfying $q_{n(j)-1} \le j < q_{n(j)}$. Then, for almost all $x\in\mathbb{R}$, \[\|x+j\alpha \| < \varphi\big(n(j)\big)\] for infinitely many $j\in\mathbb{N}$ if and only if \[\sum_{n=1}^\infty\Big((q_{n}-q_{n-1}) \min\big(\varphi(n),\|q_{n-1}\alpha\|\big)\Big) = \infty.\] \end{theorem} This theorem can be easily adapted from the proof of Fuchs and Kim \cite[Theorem 1.2]{MR3494133}, by cheking that the hypothesis of $\psi$ being decreasing is useless in the case where it is constant equal to $\varphi$ on every interval $[q_n,q_{n+1})$. \begin{proof}[Proof of Lemma~\ref{LemEnLeb1}] Lemma~\ref{LemEnLeb1} lies in an application of Theorem~\ref{CiteTh12}. More precisely, by \eqref{EqLambdaQ}, one has $\|q_{n-1} \alpha\| = \lambda^{(n-1)}\ge \frac{1}{2q_{n}}$. Choose \[\varphi(n) = \begin{cases} \frac{\epsilon_{n}}{q_{n}\log(3q_{n})}\quad & \text{if } a_{n}, a_{n+1}\ge 2\\ 0 & \text{if } a_{n} =1 \text{ or }a_{n+1} =1. \end{cases}\] In particular, if $a_n, a_{n+1} \ge 2$, then $\varphi(n) \le \|q_{n-1} \alpha\|$. We compute \begin{align*} \sum_{\substack{n \ge 1}}\Big((q_{n}-q_{n-1}) \min\big(\varphi(n),\|q_{n-1}\alpha\|\big)\Big) & \ge \sum_{\substack{n \in \mathbb{N}\\ a_n, a_{n+1} \ge 2}}\!\!\! \Big((q_{n}-q_{n-1}) \min\big(\varphi(n),\|q_{n-1}\alpha\|\big)\Big)\\ & \ge \sum_{\substack{n \in \mathbb{N}\\ a_n, a_{n+1} \ge 2}} \frac{q_{n}-q_{n-1}}{q_{n}}\frac{\epsilon_{n}}{\log(3q_{n})}. \end{align*} But as $a_n>1$, one has \[\frac{q_{n}-q_{n-1}}{q_{n}} \ge 1-\frac{1}{a_n} \ge 2,\] so \[\sum_{\substack{n \ge 1}}\Big((q_{n}-q_{n-1}) \min\big(\varphi(n),\|q_{n-1}\alpha\|\big)\Big) \ge \sum_{\substack{n \in \mathbb{N}\\ a_n, a_{n+1} \ge 2}} \frac{\epsilon_{n}}{\log(3q_{n})}.\] Hence, Theorem~\ref{CiteTh12} applies and implies that for almost all $x\in\mathbb{R}$, for any $M\in\mathbb{N}$, there exists $j\ge M$ such that $a_{n(j)}, a_{n(j)+1}\ge 2$ and \[\|x+j\alpha \| < \frac{\epsilon_{n(j)}}{q_{n(j)}\log(3q_{n(j)})},\] in other words that $x\in E$. \end{proof} \section{Physical measures for stopping points on the same orbit}\label{SecPhysSame} \subsection{Statement and ideas of proof} The aim of this section is to provide conditions on $\alpha$ under which flows with stopping points on the same orbit have a physical measure. The simplest situation in which this happens is when the sequence $a_n$ tends to infinity sufficiently fast. 
\begin{theorem}\label{simple physical measure} Let $\alpha=[a_0; a_1, a_2, \ldots]$ be such that \[\sum_n \frac{1}{\log a_n} < \infty.\] Then any reparameterized linear flow satisfying $(SH)$, with $\mathbf{p}$ and $\mathbf{q}$ in the same orbit, has a unique physical measure, which attracts Lebesgue almost any point, and equal to $\mu_\infty$ (defined in \eqref{EqFormPhys}). \end{theorem} A similar result can be obtained by assuming sufficiently rapid growth of $q_n$. \begin{theorem}\label{PropConv} If $\mathbf{p}$ and $\mathbf{q}$ lie on the same orbit of the flow, and if there exist $C>0$ and $\gamma>0$ such that $q_n \ge C \exp(n^{2+\gamma})$, then the system has a unique physical measure, which attracts Lebesgue almost any point, and equal to $\mu_\infty$ (defined in \eqref{EqFormPhys}). \end{theorem} Theorem~\ref{PropConv} is harder to prove than Theorem~\ref{simple physical measure} because it allows large oscillation in the sequence $a_n$ forcing us to use different estimates depending on whether $a_n$ is small or large. Comparing these statements with Theorem~\ref{PropDivSum}, we can observe that in the case of stopping points on the same orbit, the flow has historic behaviour for Diophantine $\alpha$ and a unique physical measure for sufficiently Liouvillian $\alpha$. It could be counterintuitive at first sight, but one has to keep in mind that in the Liouvillian case, as the orbit of $0$ eventually comes back really close to $0$, it lets space for most of the other points to come back far away from 0. \begin{lemma}\label{lemHaus} The set of numbers $\alpha\in\mathbb{R}$ such that there exist $C>0$ and $\gamma>0$ such that $q_n \ge C \exp(n^{2+\gamma})$ is of zero Lebesgue measure but full Hausdorff dimension. \end{lemma} \begin{proof} This is a direct consequence of a theorem of Jarn{\'i}k Besicovitch: following Durand \cite{Durand}, combining Proposition 1.8 with Theorem 3.1, for any $\tau>2$, the set of $\alpha\in\mathbb{R}$ such that there exists $C>0$ such that $\log q_n < (\tau-1)^n C$ has Hausdorff dimension $\ge 2/\tau$. \end{proof} Let us first explain the idea of the proof of Theorem~\ref{PropConv}. To make it simpler we will suppose that the projections $p_0$ and $q_0$ of respectively $\mathbf{p}$ and $\mathbf{q}$ on the transverse section $\Sigma$ satisfy $q_0 = R_\alpha(p_0)$ (see Section \ref{SecDefFlow}). Using Proposition~\ref{criterium1}, we want to show that for almost any point $\mathbf{x}\in\mathbb{T}^2$ and for any $j$ large enough, the Birkhoff sum for the observable $\|\cdot - p_0\|^{-1}$ over the $j$ first return times of $\mathbf{x}$ on $\Sigma$ for the flow with only one stopping point at $\mathbf{p}$ is more or less the same as the sum of the $j$ first return times for the flow with only one stopping point at $\mathbf{q}$. But as $q_0 = R_\alpha(p_0)$, the difference between these two sums is more or less the value of the last return time of the orbit of $p_0$. 
Hence, supposing without loss of generality that $p_0=0$, what we want to show is that for almost any $x\in\mathbb{T}$, (see Lemma~\ref{CritPhys}) \[\psi\big(R_\alpha^j(x)\big) = o\left(\sum_{i=0}^{j-1} \psi(R_\alpha^i(x))\right).\] More precisely, we will prove that the measure $\lambda_n$ of the set of points $x\in \mathbb{T}$ for which there exists a time $q_n\le j < q_{n+1}$ such that \begin{equation}\label{EqDiv} \psi(R_\alpha^j(x)) \ge n^{-\gamma/6}\sum_{i=0}^{j-1} \psi(R_\alpha^i(x)) \end{equation} satisfies $\sum_n \lambda_n<+\infty$ (Lemmas~\ref{alternative summability}, \ref{summability under rapid growth} and \ref{LemSerHarmo}). Hence, almost any point of the circle will be eventually ``good'' between times $q_n$ and $q_{n+1}$. To do this, we have to prove that for most of points, the sum on the right of \eqref{EqDiv} is sufficiently large. We separate two different cases for each $n$: \begin{itemize} \item Either $a_n$ is big, that is, $q_{n+1}\gg q_n$ --- note that it has to happen an infinite number of times (otherwise we could not have $q_n \ge C \exp(n^{2+\gamma})$). It will turn out that in this case, the most important contribution for the Birkhoff sum comes from the returns in the ground floor $\Delta^{(n-1)}$, that is, the term $\psi_1(y_0)$ of the first part of Lemma~\ref{EqSellFinal}. In practice, we will cut the ground floor $\Delta^{(n-1)}$ into the $a_n$ ground floors of the sectors (defined page \pageref{Sectors}), and throw away the points that are sufficiently close to the the preimage of 0. The remaining points will not satisfy \eqref{EqDiv}, simply because they will return in the ground floor a lot of times before coming close to 0, which will increase sufficiently the right part of \eqref{EqDiv}. \item Or $a_n$ is small, that is, $q_{n+1}\not\gg q_n$. In this case, the most important contribution for the Birkhoff sum comes from the sums in the whole sectors but the ground floor, that is, the term $\log q_n / \big(2 \lambda^{(n-1)}\big)$ of the first part of Lemma~\ref{EqSellFinal}. The fact that $q_n$ is large enough will imply that the right part of \eqref{EqDiv} is large enough, which will ensure that the proportion of points satisfying \eqref{EqDiv} is small enough. \end{itemize} Of course, these considerations will be made precise in the proof of Theorem~\ref{PropConv}. 
\begin{figure} \begin{tikzpicture}[scale=.75] \fill[fill=green, opacity=.1] (0,0) rectangle (-2.5,1); \fill[fill=blue, opacity=.1] (0,0) rectangle (13.5,1.5); \fill[fill=green, opacity=.1] (1,0) rectangle (13.5,1.5); \draw (0,0) -- (0,2); \draw (0,0) -- (13.5,0); \draw[color=blue!60!black] (0,.5) -- (13.5,.5); \draw[color=blue!60!black] (0,1) -- (13.5,1); \draw[color=blue!60!black] (0,1.5) -- (13.5,1.5); \draw[color=blue!60!black] (13.5,0) -- (13.5,1.5); \draw (-2.5,0) -- (0,0); \draw[color=green!60!black] (-2.5,.5) -- (0,.5); \draw[color=green!60!black] (-2.5,1) -- (0,1); \draw[color=green!60!black] (-2.5,0) -- (-2.5,1); \draw[color=green!60!black] (1,0) -- (1,1.5); \draw[color=green!60!black] (3.5,0) -- (3.5,1.5); \draw[color=green!60!black] (6,0) -- (6,1.5); \draw[color=green!60!black] (8.5,0) -- (8.5,1.5); \draw[color=green!60!black] (11,0) -- (11,1.5); \foreach \mathbf{x} in {1,...,5} {\draw[dashed, thick, color=red!70!black] (2.5*\mathbf{x},0) -- (2.5*\mathbf{x},1.5);} \draw[dashed, thick, color=red!70!black] (-.5,0) -- (-.5,1); \fill[color=red, opacity=.2] (2.3,0) rectangle (2.7,1.5); \fill[color=red, opacity=.2] (4.8,0) rectangle (5.2,1.5); \fill[color=red, opacity=.2] (7.3,0) rectangle (7.7,1.5); \fill[color=red, opacity=.2] (9.85,0) rectangle (10.15,1.5); \fill[color=red, opacity=.2] (12.4,0) rectangle (12.6,1.5); \fill[color=red, opacity=.2] (-.6,0) rectangle (-.4,1); \draw [decorate,decoration={brace,amplitude=5pt},xshift=0,yshift=-.9cm] (13.5,0) -- (0,0) node [black,midway,yshift=-0.5cm] {$\Delta^{(n-1)}$}; \draw [decorate,decoration={brace,amplitude=5pt},xshift=0,yshift=-.9cm] (0,0) -- (-2.5,0) node [black,midway,yshift=-0.5cm] {$\Delta^{(n)}$}; \draw [decorate,decoration={brace,amplitude=5pt},xshift=0,yshift=.2cm] (1,1.5) -- (13.5,1.5) node [black,midway,yshift=0.5cm] {$a_{n+1}$ towers}; \draw[<->] (13.7,0) --node[midway, right]{$q_n$} (13.7,1.5); \draw[<->] (-2.7,0) --node[midway, left]{$q_{n-1}$} (-2.7,1); \draw[<->] (-2.5,-.2) --node[midway, below]{$\lambda^{(n)}$} (0,-.2); \draw[<->] (1,-.2) --node[midway, below]{$\lambda^{(n+1)}$} (0,-.2); \end{tikzpicture} \caption{The pre-orbit of 0 (hatched lines) and the set $D_n$ (light red rectangles). }\label{FigRenor8} \end{figure} \subsection{A criterium for convergence} The following easy lemma gives a sufficient condition for having $\mu_\infty$ (defined in \eqref{EqFormPhys}) as a physical measure. \begin{lemma}\label{CritPhys} If $\mathbf{p}$ and $\mathbf{q}$ are on the same $\phi^t$-orbit, $x$ is not on the $R_\alpha$-orbit of $p_0$ and \[\psi(R_\alpha^j(x)) = o\left(\sum_{i=0}^{j-1} \psi(R_\alpha^i(x))\right),\] then $p\omega(x)$ is equal to $\{\mu_\infty\}$. \end{lemma} \begin{proof} By Proposition~\ref{criterium1}, it suffices to prove that $\Theta_n^\beta(x) \to_n 1$. Recall that \[\Theta_n^\beta(x) = \frac{S_n(x)}{S_n(x-\beta)}.\] As $\mathbf{p}$ and $\mathbf{q}$ are on the same orbit, there exists a section $\Sigma$ and a number $n_0\in\mathbb{N}^*$ such that, writing $p_0$ and $q_0$ as in Paragraph \ref{SecDefFlow}, one has $q_0 = R_\alpha^{n_0}(p_0)$, in other words $\beta \equiv n_0\alpha \mod 1$. 
It is straightforward to verify that \[S_{n_0}(x)+ S_n(x+\beta) = S_{n+n_0}(x) = S_n(x)+S_{n_0}(R_\alpha^n(x)).\] Hence, \[\Theta_n^\beta \circ R_\alpha^{n_0}(x) = \frac{S_n(x+\beta)}{S_n(x)} = 1 - \frac{S_{n_0}(x)}{S_n(x)} + \frac{S_{n_0}(R_\alpha^n(x))}{S_n(x)}.\] Of course, to show that $\Theta_n^\beta(x) \to 1$ $\boldsymbol{\lambda}$-almost everywhere is equivalent to show that $\Theta_n^\beta \circ R_\alpha^{n_0}(x) \to 1$ $\boldsymbol{\lambda}$-almost everywhere, since $\boldsymbol{\lambda}$ is $R_\alpha$-invariant. As $S_n(x)\to_n +\infty$, the second term tends to 0. So it suffices to prove that the last term also tends to 0. Remark that under the hypothesis of the lemma, one can easily check by recurrence that for any $k\ge 0$, \[\psi\big(R_\alpha^{j+k}(x)\big) = o_j\left(\sum_{i=0}^{j-1} \psi(R_\alpha^i(x))\right),\] so that for any $k_0 \ge 0$, \[\sum_{k=0}^{k_0}\psi\big(R_\alpha^{j+k}(x)\big) = o_j\left(\sum_{i=0}^{j-1} \psi(R_\alpha^i(x))\right),\] in other words \[\frac{S_{n_0}(R_\alpha^n(x))}{S_n(x)} = o_n(1).\] \end{proof} \subsection{Bad sets of initial points} For $n \geq 0$ and $k \geq 1$ we set \[a_{n,k} = \frac{\log k}{\lambda^{(n)}}, \qquad b_{n,k} = \frac{k \log q_n}{\lambda^{(n-1)}},\] and \[c_{n,k} = \max\{a_{n,k}, b_{n,k} \}.\] Note that $c_{n,k}$ is positive as long as $n \geq 2$. Let $u_n$ be an increasing sequence of positive numbers tending to infinity. For $n \geq 2$ and $1 \leq k \leq a_{n+1}$ let \begin{equation}\label{def of I} I_{n,k} = \left[-\frac{u_n}{2 c_{n,k}},\, \frac{u_n}{2 c_{n,k}}\right]. \end{equation} Define the sequences \begin{equation}\label{EqDefNi} n(i) = \max \{n \geq 0: q_n \leq i \} \quad \text{and} \quad k(i) = \max \{k \geq 0: k q_{n(i)} \leq i \}, \end{equation} so that $q_{n(i)} \le i < q_{n(i)+1}$ and $k(i) q_{n(i)} \le i < (k(i)+1)q_{n(i)}$. For $i \geq 0$ let \[B_i = R_\alpha^{-i} (I_{n(i), k(i)}),\] and \begin{equation}\label{EqDefDn} D_n = \bigcup_{i=q_n}^{q_{n+1}-1} B_i . \end{equation} \begin{lemma}\label{LemConvDn2} Let $n \geq 2$. If $x\notin D_n$, then for any $q_n \leq i < q_{n+1}$, and any $x'$ such that $\|x-x'\|\le \lambda^{(n)}$, we have \begin{equation} \label{at time j_0} \psi\big(R_\alpha^i(x)\big) \le \frac{8}{u_n} \sum_{j=0}^{i-1} \psi\big(R_\alpha^j(x')\big). \end{equation} \end{lemma} In particular, we will apply this lemma for $x'=x$. \begin{proof}[Proof of Lemma~\ref{LemConvDn2}] Fix some $q_n \leq i < q_{n+1}$ (so that $n=n(i)$) and let $j_0$ be such that \[\|R_\alpha^{j_0}(x)\| = \min_{q_n \leq j \leq i} \|R_\alpha^j(x)\|.\] Let $y_0 = R_\alpha^{j_0}(x)$. Note that the right hand side of (\ref{at time j_0}) is increasing in $i$. Hence it suffices to check that \begin{equation} \label{enough} \psi\big(R_\alpha^{j_0}(x)\big) \le \frac{8}{u_n} \sum_{j=0}^{j_0-1} \psi\big(R_\alpha^{j}(x)\big). \end{equation} We do that in two different cases. For ease of notation, we write $k_0 = k(j_0)$. \textbf{Case (1):} $\| y_0 \| > \lambda^{(n)}/2$.\\ By definition of $k_0$ we have $j_0 = q_n k_0 + \ell$ for some $0 \leq \ell < q_n$. Let $j_1 = q_n+ \ell$ and denote by $x_0$ the point $R_\alpha^{j_1}(x)$. Then, using the assumption that $\|y_0\|> \lambda^{(n)}/2$, we have \begin{align*} \|x_0 \| & = \|y_0 - (k_0-1) q_n \alpha \| \\ & \leq \|y_0 \| + (k_0-1) \|q_n \alpha\| \\ & = \|y_0 \| + (k_0-1) \lambda^{(n)} \\ & < (2k_0-1) \|y_0\| <2 k_0 \|y_0\| . 
\end{align*} Hence \[\psi(y_0) < 2k_0 \psi(x_0).\] Since $x \notin D_n$, it follows that $x_0 = R_\alpha^{j_0}(x) \notin I_{n,1}$ and therefore \[\psi(x_0) < 2\frac{\log q_n}{u_n \lambda^{(n-1)}}.\] Thus by Corollary~\ref{kq orbit} we have \[\psi(y_0) < 2k_0 \psi(x_0) < 4k_0\frac{\log q_n}{u_n \lambda^{(n-1)}} < \frac{8}{u_n} \sum_{j=0}^{k_0 q_n - 1} \psi\big(R_\alpha^j(x')\big) \leq \frac{8}{u_n} \sum_{j=0}^{j_0-1} \psi\big(R_\alpha^j(x')\big)\] as required. \textbf{Case (2):} $\|y_0 \| \leq \frac{\lambda^{(n)}}{2}$.\\ It follows from the hypothesis $x \notin D_n$ that \begin{equation} \label{c estimate} \psi(y_0) < 2\frac{c_{n,k_0}}{u_n}. \end{equation} To show (\ref{at time j_0}) we need two estimates. First, from Corollary~\ref{kq orbit} we have \begin{equation} \label{a estimate} \sum_{j=0}^{j_0-1} \psi\left(R_\alpha^{j}(x')\right) \geq \sum_{j=0}^{k_0 q_n-1} \psi\left(R_\alpha^{j}(x')\right) > \frac{k_0 \log q_n}{2 \lambda^{(n-1)}} = \frac{b_{n, k_0}}{2}. \end{equation} Second, from Remark~\ref{lower bound2} (following Lemma~\ref{lower bound}), that can be applied because $\|y_0\|\le \lambda^{(n)}/2$ and $\|x-x'\|\le\lambda^{(n)}$, we obtain the estimate \begin{equation} \label{b estimate} \sum_{i=0}^{j_0-1} \psi(R_\alpha^i(x')) \geq \sum_{i=0}^{k_0 q_n-1} \psi(R_\alpha^i(x')) \geq \frac{\log k_0}{4\lambda^{(n)}} = \frac{a_{n, k_0}}{4}. \end{equation} Putting (\ref{a estimate}) and (\ref{b estimate}) together and comparing with (\ref{c estimate}) \[\frac{8}{u_n} \sum_{j=0}^{j_0-1} \psi\big(R_\alpha^{j}(x')\big) > \frac{2 c_{n,k_0}}{u_n} \ge \psi(y_0),\] again proving that (\ref{enough}) must hold. \end{proof} \subsection{Proof of Theorems \ref{simple physical measure} and \ref{PropConv}} We now turn to the proof of Theorems~\ref{simple physical measure} and \ref{PropConv}. In view of Lemma~\ref{CritPhys} and Lemma~\ref{LemConvDn2}, it suffices to prove that $\boldsymbol{\lambda}$-almost every point $x \in \mathbb{T}$ belongs to $D_n$ for at most finitely many $n$. By virtue of the Borel-Cantelli lemma, this is the case whenever $\boldsymbol{\lambda}(D_n)$ is summable. In other words, Theorems~\ref{simple physical measure} and \ref{PropConv} follow, respectively, from the following two lemmas. \begin{lemma} \label{alternative summability} Suppose that \begin{equation} \label{hyp on a_n} \sum_{n} \frac{1}{\log a_{n}} < \infty. \end{equation} Then it is possible to choose $u_n$ in \eqref{def of I} so that \[\sum_{n \geq 2} \boldsymbol{\lambda}(D_n) < \infty\] \end{lemma} \begin{lemma}\label{summability under rapid growth} Suppose that $\alpha$ is such that $q_n> C \exp(n^{2+\gamma})$ for some $C, \gamma>0$ and every $n$. Then, taking $u_n = n^{\gamma/4}$ in \eqref{def of I}, we have \[\sum_{n\geq 2} \boldsymbol{\lambda}(D_n) < \infty.\] \end{lemma} \begin{proof}[Proof of Lemma~\ref{alternative summability}] Let $u_n$ be an increasing sequence of positive numbers tending to infinity slowly enough so that \begin{equation} \label{summability after multiplication} \sum_{n} \frac{u_n}{\log a_{n}} < \infty \end{equation} and let $D_n$ be defined accordingly as in \eqref{def of I}. We decompose the sum as \begin{equation} \label{decomposition2} \sum_{n \geq 2} \boldsymbol{\lambda}(D_n) \leq \sum_{n \geq 2} q_n \boldsymbol{\lambda}(I_{n,1}) + \sum_{n \geq 2} \sum_{k=2}^{a_{n+1}} \boldsymbol{\lambda}(I_{n,k}) q_n. \end{equation} Recall from Lemma~\ref{properties} that $\lambda^{(n-1)} q_n < 1$. 
Hence \[\sum_{n \geq 2} q_n\boldsymbol{\lambda} (I_{n,1}) \leq \sum_{k \geq 2} \frac{\lambda^{(n-1)} q_n u_n}{\log q_n} \leq \sum_{k \geq 2} \frac{u_n}{ \log q_n} < \sum_{k \geq 2} \frac{u_n}{ \log a_n}< \infty.\] We turn to the second term in (\ref{decomposition2}). Let \[P_n = \sum_{k=2}^{a_{n+1}} \boldsymbol{\lambda}(I_{n,k}) q_n .\] Using Lemmas~\ref{properties} and \ref{LemSerHarmo} we have \[P_n \leq \sum_{k = 2}^{a_{n+1}} \frac{q_n u_n}{ c_{n,k}} \leq \sum_{k=2}^{a_{n+1}} \frac{q_n u_n}{ a_{n,k}} = \sum_{k=2}^{a_{n+1}} \frac{\lambda^{(n)} u_n}{ \log k} \leq \frac{C a_{n+1} \lambda^{(n)} q_n u_n}{ \log(a_{n+1})} \leq \frac{C u_n}{ \log{a_{n+1}}} \leq \frac{C u_{n+1}}{a_{n+1}}. \] Hence by \eqref{summability after multiplication} we conclude that \[ \sum_{n \geq 2} P_n < \infty.\] \end{proof} \begin{proof}[Proof of Lemma~\ref{summability under rapid growth}] Just as in the proof of Lemma~\ref{alternative summability} we decompose the sum as \begin{equation} \label{decomposition} \sum_{n \geq 2} \boldsymbol{\lambda}(D_n) \leq \sum_{n \geq 2} \boldsymbol{\lambda}(I_{n,1}) + \sum_{n \geq 2} \sum_{k=2}^{a_{n+1}} \boldsymbol{\lambda}(I_{n,k}) q_n. \end{equation} Recall from Lemma~\ref{properties} that $\lambda^{(n-1)} q_n < 1$. Hence \[\sum_{n \geq 2} \boldsymbol{\lambda} (I_{n,1}) \leq \sum_{k \geq 2} \frac{\lambda^{(n-1)} q_n n^{\gamma/4}}{\log q_n} \leq \sum_{k \geq 2} \frac{n^{\gamma/4}}{ \log q_n} < \sum_{n \geq 2} \frac{n^{\gamma/4}}{n^{2+\gamma}} < \infty.\] We turn to the second term in (\ref{decomposition}). Let \[P_n = \sum_{k=2}^{a_{n+1}} \boldsymbol{\lambda}(I_{n,k}) q_n .\] We can bound $P_n$ from above in two ways. On the one hand, using Lemmas~\ref{properties} and \ref{LemSerHarmo} we have \[P_n \leq \sum_{k = 2}^{a_{n+1}} \frac{q_n n^{\gamma/4}}{ c_{n,k}} \leq \sum_{k=2}^{a_{n+1}} \frac{q_n n^{\gamma/4}}{ a_{n,k}} = \sum_{k=2}^{a_{n+1}} \frac{\lambda^{(n)} n^{\gamma/4}}{ \log k} \leq \frac{C a_{n+1} \lambda^{(n)} q_n n^{\gamma/4} }{ \log(a_{n+1})} \leq \frac{C n^{\gamma/4}}{ \log{a_{n+1}}}. \] On the other hand \[P_n \leq \sum_{k = 2}^{a_{n+1}} \frac{q_n n^{\gamma/4}}{ c_{n,k}} \leq \sum_{k=2}^{a_{n+1}} \frac{ q_n n^{\gamma/4}}{ b_{n,k}} \leq \sum_{k=2}^{a_{n+1}} \frac{ q_n \lambda^{(n-1)} n^{\gamma/4} }{ k \log(q_n)} \leq \frac{\log a_{n+1} n^{\gamma/4}}{ \log(q_n)}.\] Let \[\mathcal{A} = \{n \geq 2: a_{n+1} > \exp(n^{1+\gamma/2}) \}\] and \[\mathcal{B} = \{n \geq 2: a_{n+1} \leq \exp(n^{1+\gamma/2}) \}.\] If $n \in \mathcal{A}$ then \[P_n \leq \frac{C\, n^{\gamma/4}}{ \log a_{n+1}} \leq \frac{C\,n^{\gamma/4}}{n^{1+\gamma/2}} = \frac{C}{ n^{1+\gamma/4}}.\] Hence \[\sum_{n \in \mathcal{A}} P_n < \infty.\] If $n \in \mathcal{B}$ then \[P_n \leq \frac{ \log a_{n+1} n^{\gamma/4} }{ \log q_n} \leq \frac{n^{1+\gamma/2} n^{\gamma/4}}{n^{2+\gamma}} = \frac{1}{n^{1+\gamma/4}}.\] Hence \[\sum_{n \in \mathcal{B}} P_n < \infty.\] We conclude the proof by noting that \[\sum_{n \geq 2} \sum_{k=2}^{a_{n+1}} \boldsymbol{\lambda} (I_{n,k}) = \sum_{n \geq 2} P_n = \sum_{n \in \mathcal{A}} P_n + \sum_{n \in \mathcal{B}} P_n < \infty.\] \end{proof} \section{Physical measures for stopping points on different orbits}\label{SecDiff} In this last section, we use the arguments of the proof of Theorem~\ref{PropConv} to get the existence of flows satisfying (SH) with stopping points $\mathbf{p}$ and $\mathbf{q}$ on different orbits and with a physical measure. 
\begin{theorem}\label{physmeas different orbits} Let $\alpha=[a_0; a_1, a_2, \ldots]$ be such that one of the following conditions holds: \begin{itemize} \item $\sum_n \frac{1}{\log a_n} < \infty$; \item there exist $C>0$ and $\gamma>0$ such that $q_n \ge C \exp(n^{2+\gamma})$. \end{itemize} Let \begin{equation}\label{EqDefBeta0} \beta_0 = \sum_{n\geq 0 } \rho_n. \end{equation} If $\phi^t$ is a reparametrized linear flow with angle $\alpha$ and stopping points at $(0,0)$ and $(0, -\beta_0)$ satisfying (SH), then $\phi^t$ has a physical measure. \end{theorem} Remark that for a fixed $\mathbf{p}$, the set of $\mathbf{q}$ satisfying the conclusion of this theorem is dense (simply by the fact that the positive orbit under the linear flow of such a point $\mathbf{q}$ is dense). We start by proving that the number $\beta_0$ is not in the $R_\alpha$-orbit of 0 whenever $a_n$ is not eventually constant equal to 1 (Lemma~\ref{beta0notorbit}). Hence, Theorem~\ref{physmeas different orbits} gives the existence of reparametrized linear flows with two stopping points which are not on the same orbit and with a unique physical measure; in particular, using Lemma~\ref{lemHaus}, its conclusion holds on a set of full Hausdorff dimension (as the set of points failing to satisfy Lemma~\ref{beta0notorbit} is at most countable). \subsection{The number $\beta_0$\label{SubSecBeta0}} Let $\beta_0$ as in \eqref{EqDefBeta0}. Note that $\beta_0$ can be written as \[\beta_0 = \sum_{k=0}^{n-1} (q_k \alpha - p_k) + \sum_{k=n}^\infty \rho_k = \ell_{n-1} \alpha + \beta_n \mod 1,\] where \[\ell_{n-1} = q_0 + \ldots + q_{n-1} \qquad \text{and} \qquad \beta_n = \sum_{k=n}^\infty \rho_k.\] \begin{lemma} \label{ell versus q} The sequence $\ell_n$ satisfies \begin{enumerate} \item $\ell_{n} < q_{n}+q_{n+1}$ for every $n\geq 0$, and \item $\ell_{n} < q_{n+1}$ for every $n \geq 1$ such that $a_{n+1} \geq 2$. \end{enumerate} \end{lemma} \begin{proof} We prove $(1)$ by induction. Clearly \[\ell_0 = q_0 < q_0 + q_1,\] so $(1)$ holds for $n=0$. Now suppose that $(1)$ hods for $n$. Then, using $q_{n-1} + q_n \leq q_{n+1}$, we obtain \[\ell_{n+1} = \ell_{n}+q_{n+1} < (q_{n}+q_{n+1}) + q_{n+1} \leq q_{n+2}+q_{n+1}.\] In other words, $(1)$ holds for $n+1$ and the proof follows by induction. We now turn to the proof of $(2)$. Fix some $n \geq 1$ with $a_{n+1} \geq 2$. Then (see Lemma~\ref{properties}) $q_{n+1} \geq 2 q_n+ q_{n-1}$. We know from $(1)$ that $\ell_{n-1} < q_{n-1}+q_{n}$. Hence \[\ell_{n} = \ell_{n-1}+ q_{n} < q_{n-1}+2 q_{n} \leq q_{n+1}.\] \end{proof} \begin{lemma} \label{alternating sum} We have $\| \beta_n \| < \lambda^{(n)}$. \end{lemma} \begin{proof} Decreasing alternating series. \end{proof} \begin{lemma}\label{beta0notorbit} Let $\alpha = [a_0; a_1, a_2, \ldots ]$ and suppose that there are infinitely many $n$ such that $a_{n} \geq 2$ (which is true if $\alpha\notin \mathbb{Q}[\sqrt 5]$). Then the point $\beta_0$ is not on the $R_\alpha$-orbit of zero. \end{lemma} \begin{proof} We begin by showing that $\beta_0$ is not on the positive $R_\alpha$-orbit of $0$. To this end, fix some $k \geq 0$ and choose $n$ such that $q_{n+1}>k$ and $k\neq \ell_n$. From (1) of Lemma~\ref{ell versus q}, we have $\ell_n < q_{n+1}+q_n$. Hence $|\ell_n - k| < q_{n+2}$ so that by \eqref{EqContFrac0}, $\|(\ell_n-k) \alpha \| \geq \lambda^{(n+1)}$. Moreover, from Lemma~\ref{alternating sum} we have $\| \ell_n \alpha - \beta_0 \| = \| \beta_{n+1}\| < \lambda^{(n+1)}$. 
It follows that \[\|k \alpha - \beta_0 \| \geq \| k \alpha - \ell_n \alpha \| - \| \ell_n \alpha - \beta_0 \| > \lambda^{(n+1)}-\lambda^{(n+1)}> 0.\] This shows that $\beta_0 \neq k \alpha \mod 0$. Note that $k \geq 0$ was chosen arbitrarily. Hence $\beta_0$ is not on the positive orbit of zero under $R_\alpha$. \medskip Fix some integer $m<0$ and choose $n$ such that $q_{n} > -m$ and $a_{n+1}\ge 2$. By (2) of Lemma~\ref{ell versus q}, we have $\ell_n < q_{n+1}$ and so $0<\ell_n-m < q_{n+1}+q_{n} \le q_{n+2}$. This implies that $\|(\ell_n-m)\alpha\|\ge \lambda^{(n+1)}$. Hence, \begin{align*} \| m \alpha - \beta_0 \| & \ge \| m \alpha - \ell_n\alpha \| - \|\ell_n\alpha - \beta_0\| \\ & = \|(m- \ell_n ) \alpha \| - \|\beta_{n+1} \| \\ & > \lambda^{(n+1)} - \lambda^{(n+1)} = 0. \end{align*} Since $m<0$ has been taken arbitrarily this shows that $\beta_0$ is not on the negative $R_\alpha$ orbit of 0. \end{proof} \begin{remark} The converse of Lemma \ref{beta0notorbit} is also true: if $\alpha$ is such that $a_n = 1$ for all but finitely many $n$, then $\beta_0$ is on the $R_\alpha$-orbit of $0$. To see why, fix $N$ such that $ a_n = 1$ for every $n \geq N+1$. Then (see Lemma~\ref{properties}) \[\rho_n= \rho_{n-2}-\rho_{n-1}\] for every $n \geq N+1$. So for $n \geq N$ we can write \begin{align*} \sum_{k=0}^n \rho_k &= \sum_{k=0}^N \rho_k + \sum_{k=N+1}^n \rho_k \\ & = \sum_{k=0}^N \rho_k + \sum_{k=N+1}^n (\rho_{k-2}-\rho_{k-1}) \\ & = \sum_{k=0}^N \rho_k + \rho_{N-1} - \rho_{n-1}. \end{align*} Taking the limit $n \to \infty$ we get \[\beta_0 = \sum_{k=0}^\infty \rho_n = \sum_{k=0}^N \rho_k + \rho_{N-1} = (\ell_N + q_{N-1}) \alpha \mod 1.\] Hence $\beta_0$ is on the $R_\alpha$-orbit of $0$. \end{remark} \subsection{Proof of Theorem~\ref{physmeas different orbits}} Without loss of generality one can suppose that $p_0=0$ (where $(x_0,p_0)$ is the point of the section $\Sigma$ corresponding to $\mathbf{p}$, see Paragraph \ref{SecDefFlow}); hence $\beta=q_0$ corresponds to the projection of the point $\mathbf{q}$ on $\Sigma$. Recall that in the previous section we have defined (in \eqref{EqDefDn}) the set $D_n$ of ``bad points''. We now define an alternative version of this set, using an alternative version of \eqref{EqDefNi}: let $u_n$ be as in Lemmas \ref{alternative summability} or \ref{summability under rapid growth} (depending on whether we are in the first or the second hypothesis of Theorem~\ref{physmeas different orbits}), and consider a sequence $(v_n)$ of integers such that \begin{equation}\label{eqpropvn} v_n\underset{n\to\infty}{\longrightarrow}\infty,\qquad \sum_{n\ge 2} v_n \lambda(D_n) < \infty, \qquad \frac{v_{n+1}}{u_n}\underset{n\to\infty}{\longrightarrow}0. \end{equation} Set \begin{equation*} \tilde n(i) = \max \{n \geq 0: v_n q_n \leq i \} \quad \text{and} \quad \tilde k(i) = \max \{k \geq 0: k q_{\tilde n(i)} \leq i \}, \end{equation*} so that $v_{\tilde n(i)} q_{\tilde n(i)} \le i < v_{\tilde n(i) + 1} q_{\tilde n(i)+1}$ and $\tilde k(i) q_{\tilde n(i)} \le i < (\tilde k(i)+1)q_{\tilde n(i)}$. Note that \begin{equation}\label{EqTildeK} v_{\tilde n(i)} \ \le \ \tilde k(i)\ < \ v_{\tilde n(i)+1} \frac{q_{\tilde n(i)+1}}{q_{\tilde n(i)}}. 
\end{equation} For $i \geq 0$ let (recall that by \eqref{def of I}, one has $ I_{n,k} = \big[-u_n/(2 c_{n,k}),\, u_n/(2 c_{n,k})\big]$) \[\tilde B_i = R_\alpha^{-i} \left(I_{\tilde n(i), \tilde k(i)}\right);\] this allows us to define an alternative version of \eqref{EqDefDn}, \begin{equation*}\label{EqDefDn2} \tilde D_n = \bigcup_{i=v_n q_n}^{v_{n+1} q_{n+1}\,-1} \tilde B_i, \end{equation*} and \begin{equation}\label{EqD} \tilde D = \bigcap_{N\in\mathbb{N}} \bigcup_{n\ge N} \big(\tilde D_n \cup (\tilde D_n-\beta_0) \cup (\tilde D_n-\ell_{n-1}\alpha) \big). \end{equation} By a trivial adaptation of Lemma~\ref{summability under rapid growth}, this set has null measure (using \eqref{eqpropvn} and the fact that the measure of the new set $\tilde D_n$ is smaller than $v_n+1$ times the measure of the old $D_n$ defined in \eqref{EqDefDn}). \begin{proposition}\label{LemFinalPhysmeas} Under the hypotheses of Theorem~\ref{physmeas different orbits}, for $\beta_0$ defined by \eqref{EqDefBeta0} and for any $x\notin \tilde D$ (defined in \eqref{EqD}) which is not in the preorbits of $p_0$ or $q_0$, the point $(x_0,x)$ is in the basin of attraction of the measure $\mu_\infty$, where $\mathbf{q}$ corresponds to $(x_0,-\beta_0)\in\Sigma$. \end{proposition} This proposition implies Theorem~\ref{physmeas different orbits}, as the set $\tilde D$ has null measure. We shall use the following adaptation of Lemma~\ref{LemConvDn2}, whose proof is identical. \begin{lemma}\label{LemConvDn3} Let $n \geq 2$. If $x\notin \tilde D_n$, then for any $v_n q_n \leq i < v_{n+1}q_{n+1}$, and any $x'$ such that $\|x-x'\|\le \lambda^{(n)}$, we have \begin{equation*} \psi\big(R_\alpha^i(x)\big) \le \frac{8}{u_n} \sum_{j=0}^{i-1} \psi\big(R_\alpha^j(x')\big). \end{equation*} \end{lemma} \begin{proof}[Proof of Proposition~\ref{LemFinalPhysmeas}] As $x\notin \tilde D$, there exists $N\in\mathbb{N}$ such that \[x\notin \bigcup_{n\ge N} \big(\tilde D_n \cup (\tilde D_n-\beta_0)\cup (\tilde D_n-\ell_{n-1}\alpha)\big).\] Using Proposition~\ref{criterium1}, we shall prove that \[S_i(x) \overset{\text{def.}}{=} \sum_{j=0}^{i-1} \psi\big(R_\alpha^j(x)\big) \underset{i\to +\infty}{\sim} \sum_{j=0}^{i-1} \psi\big(R_\alpha^j(x+\beta_0)\big) = S_i(x+\beta_0),\] which is true if \[\frac{\big|S_i(x) - S_i(x+\beta_0)\big|}{S_i(x) + S_i(x+\beta_0)} \underset{i\to+\infty}{\longrightarrow}0.\] Consider $n\ge N$ such that $v_n> 2$, $2q_n\ge v_N q_N$, and such that the two returns closest to 0 of the orbit of $x$ of length $2 q_n$ under $R_\alpha$ have indices bigger than $v_Nq_N$. Take $i$ such that $v_n q_n \le i < v_{n+1} q_{n+1}$ (note that in this case, $n=\tilde n(i)$). Recall that $\beta_0$ can be written as the sum $\beta_0 = \ell\alpha +\beta_n$ with $0\le \ell \le q_n+q_{n-1}\le 2 q_n$ and $|\beta_n| < \lambda^{(n)}$ (note that in this case, $\ell=\ell_{n-1}$). Hence, \begin{equation}\label{EqDivBeta} S_i(x) - S_i(x+\beta_0) = \big(S_i(x) - S_i(x+\ell\alpha)\big) + \big(S_i(x+\ell\alpha) - S_i(x+\ell\alpha+\beta_n)\big). \end{equation} We will treat each term of this sum separately.
\begin{lemma}\label{LemSecondTerm9} Under the hypotheses of Proposition~\ref{LemFinalPhysmeas}, \[\max_{v_n q_n \le i < v_{n+1} q_{n+1}} \frac{\big|S_i(x+\ell_{n-1}\alpha) - S_i(x+\ell_{n-1}\alpha+\beta_n)\big|}{S_i(x+\beta_0)} \underset{n\to +\infty}{\longrightarrow}0.\] \end{lemma} \begin{lemma}\label{LemFirstTerm9} Under the hypotheses of Proposition~\ref{LemFinalPhysmeas}, \[\max_{v_n q_n \le i < v_{n+1} q_{n+1}} \frac{\big|S_i(x) - S_i(x+\ell_{n-1}\alpha)\big|}{S_i(x) + S_i(x+\beta_0)} \underset{n\to+\infty}{\longrightarrow}0.\] \end{lemma} These two lemmas prove Proposition~\ref{LemFinalPhysmeas}. \end{proof} \begin{proof}[Proof of Lemma~\ref{LemSecondTerm9}] In this proof, we suppose that $n$ is odd, the even case being identical. Denote \[\ell=\ell_{n-1},\qquad \overline{x} = x+\ell\alpha \qquad \text{and} \qquad k = \lfloor i/q_n\rfloor,\] so that $k=\tilde k(i)$. Note that $\overline{x} + \beta_n = x+\beta_0$. To begin with, we bound the sum using $\psi_1$ and $\psi_2$ (defined in \eqref{EqDefPsi}): \begin{equation}\label{Cut12} \big|S_i(\overline{x}) - S_i(\overline{x}+\beta_n)\big| \le \bigg|\sum_{j=0}^{i-1} \Big(\psi_1\big(R_\alpha^j(\overline{x})\big) - \psi_1\big(R_\alpha^j(\overline{x}+\beta_n)\big)\Big)\bigg| +\bigg|\sum_{j=0}^{i-1} \Big(\psi_2\big(R_\alpha^j(\overline{x})\big) - \psi_2\big(R_\alpha^j(\overline{x}+\beta_n)\big)\Big)\bigg|. \end{equation} We treat the first term of \eqref{Cut12}; we will indicate the changes for the last term when needed. For $0 \le r \le k$, we denote by $j_r$ (resp. $j'_r$) the time corresponding to the closest return to 0 of the orbit of $R_\alpha^{r q_n}(\overline x)$ (resp. $R_\alpha^{r q_n}(\overline x+\beta_n)$) of length $q_n$ in the fundamental domain $[0,1)$. As the orbit of length $i$ is made of at most $k+1$ such pieces of orbits of length $q_n$, by Lemma~\ref{LemFinalMartin}, \begin{align}\label{EqFirstDecomposition} \bigg|\sum_{j=0}^{i-1} \Big(\psi_1\big(R_\alpha^j(\overline{x})\big) - \psi_1\big(R_\alpha^j(\overline{x}+&\beta_n)\big)\Big)\bigg|\le \sum_{r=0}^k\left|\psi_1\big(R_\alpha^{j_r}(\overline{x})\big) - \psi_1\big(R_\alpha^{j_r}(\overline{x}+\beta_n)\big)\right| \\ & + \sum_{r=0}^k\left|\psi_1\big(R_\alpha^{j'_r}(\overline{x})\big) - \psi_1\big(R_\alpha^{j'_r}(\overline{x}+\beta_n)\big)\right| + (k+1) \frac{\lambda^{(n)}q_n}{\lambda^{(n-1)}} \nonumber.
\end{align} \begin{figure} \begin{tikzpicture}[scale=.65] \fill[fill=green, opacity=.1] (0,0) rectangle (-2.5,1); \fill[fill=green, opacity=.1] (13.5,-.5) rectangle (16,.5); \fill[fill=blue, opacity=.1] (0,0) rectangle (13.5,1.5); \fill[fill=green, opacity=.1] (1,0) rectangle (13.5,1.5); \draw[thick] (0,0) -- (0,2); \draw (-2.5,0) -- (16,0); \draw[color=blue!60!black] (0,.5) -- (13.5,.5); \draw[color=blue!60!black] (0,1) -- (13.5,1); \draw[color=blue!60!black] (0,1.5) -- (13.5,1.5); \draw[color=blue!60!black] (13.5,0) -- (13.5,1.5); \draw[color=green!60!black] (-2.5,.5) -- (0,.5); \draw[color=green!60!black] (-2.5,1) -- (0,1); \draw[color=green!60!black] (-2.5,0) -- (-2.5,1); \draw[color=green!60!black] (1,0) -- (1,1.5); \draw[color=green!60!black] (3.5,0) -- (3.5,1.5); \draw[color=green!60!black] (6,0) -- (6,1.5); \draw[color=green!60!black] (8.5,0) -- (8.5,1.5); \draw[color=green!60!black] (11,0) -- (11,1.5); \draw[color=green!60!black, dashed] (13.5,.5) -- (16,.5); \draw[color=green!60!black, dashed] (13.5,-.5) -- (16,-.5); \draw[color=green!60!black, dashed] (16,-.5) -- (16,.5); \draw[color=green!60!black, dashed] (13.5,0) -- (13.5,-.5); \foreach \x in {1,...,4} {\fill[color=red!80!black] (2.5*\x-.7,0) circle (.1); \fill[color=red!50!white] (2.5*\x-.7,.5) circle (.1); \fill[color=red!50!white] (2.5*\x-.7,1) circle (.1);} \fill[color=black] (2.5-.7,0) circle (.1); \fill[color=black] (5-.7,0) circle (.1); \fill[color=red!50!white] (2.5*5-.7,1) circle (.1); \fill[color=red!50!white] (2.5*5-.7,.5) circle (.1); \draw[color=red!50!white] (2.5*5-.7,.5) circle(.15); \fill[color=black] (2.5*5-.7,.4) node[above left]{$\overline x$}; \fill[color=red!80!black] (-.7,.5) circle (.1); \fill[color=red!50!white] (-.7,0) circle (.1); \draw[color=red!80!black, thick] (15.3,0) circle (.1); \draw[color=red!50!white, thick] (15.3,-.5) circle (.1); \draw [decorate,decoration={brace,amplitude=5pt},xshift=0,yshift=-.9cm] (13.5,0) -- (0,0) node [black,midway,yshift=-0.5cm] {$\Delta^{(n-1)}$}; \draw [decorate,decoration={brace,amplitude=5pt},xshift=0,yshift=-.9cm] (0,0) -- (-2.5,0) node [black,midway,yshift=-0.5cm] {$\Delta^{(n)}$}; \draw [decorate,decoration={brace,amplitude=5pt},xshift=0,yshift=.2cm] (1,1.5) -- (13.5,1.5) node [black,midway,yshift=0.5cm] {$a_{n+1}$ towers}; \draw[<->] (16.2,0) --node[midway, right]{$q_n$} (16.2,1.5); \draw[<->] (-2.7,0) --node[midway, left]{$q_{n-1}$} (-2.7,1); \draw[<->] (-2.5,-.2) --node[midway, below]{$\lambda^{(n)}$} (0,-.2); \draw[<->] (1,-.2) --node[midway, below]{$\lambda^{(n+1)}$} (0,-.2); \end{tikzpicture} \caption{Piece of the positive orbit of the point $\overline x$ (of length \emph{a priori} smaller than $i$). Each set of $q_n-1$ consecutive light red points in a column contributes to the last term of \eqref{EqFirstDecomposition}. The other points (red and black) contribute to the first terms of \eqref{EqFirstDecomposition}, i.e. the points $R_\alpha^{j_r}(\overline x)$. Among the first $a_{n+1}$ of them, at most two (in black) belong to $[0,2\lambda^{(n)}]$ (first term of the second line of \eqref{multfirst}). The other ones (in red) give rise to the second term of the second line of \eqref{multfirst}. In this example, the interval on the right of $\Delta^{(n-1)}$ is some $\Delta^{(n)}_j$, but it could also be some $\Delta^{(n-1)}_j$. The green dashed shifted small tower on the right is the same as the small leftmost tower.
}\label{FigRenor666} \end{figure} Let us first bound the last term of \eqref{EqFirstDecomposition}, which corresponds to the pink points of Figure~\ref{FigRenor666}. Recall that by Corollary~\ref{kq orbit}, one has \begin{equation}\label{EqTheLast} S_i(x+\beta_0) \ge k\frac{\log q_n}{2\lambda^{(n-1)}}. \end{equation} By Lemma~\ref{properties}, this gives \begin{equation}\label{EqFirstIziTerm} (k+1)\frac{\lambda^{(n)}q_n}{{\lambda^{(n-1)}} S_i(x+\beta_0)} \le (k+1)\frac{2q_n/q_{n+1}}{k\log q_n} \le \frac{4}{a_{n+1}\log q_n}. \end{equation} For the first term of \eqref{EqFirstDecomposition} (the ground floor in Figure~\ref{FigRenor666}, i.e. the black and red points), we will use the fact that as $x\notin (\tilde D_n-\ell_{n-1}\alpha) \cup (\tilde D_n-\beta_0)$, $\overline x\notin \tilde D_n$ and $\overline{x}+\beta_n \notin \tilde D_n$. So it is possible to apply Lemma~\ref{LemConvDn3}: as $\|\overline x-(\overline x+\beta_n)\| = \|\beta_n\| < \lambda^{(n)}$, for any $r \le k$, one has \begin{equation}\label{EqBoundCloose} \psi\big(R_\alpha^{j_r}(\overline{x})\big) + \psi\big(R_\alpha^{j_r}(\overline{x}+\beta_n)\big) \le \frac{16}{u_n} \sum_{j=0}^{i-1} \psi\big(R_\alpha^j(\overline x+\beta_n)\big). \end{equation} We treat two cases separately. \noindent {\bfseries Case 1:} $k\ge a_{n+1}$.\\ In this case, one cuts the sum into pieces of length $a_{n+1}$: \begin{multline*} \sum_{r=0}^k\left|\psi_1\big(R_\alpha^{j_r}(\overline{x})\big) - \psi_1\big(R_\alpha^{j_r}(\overline{x}+\beta_n)\big)\right| \\ \le \sum_{m=0}^{\lceil k/a_{n+1}\rceil}\sum_{p=0}^{a_{n+1}-1} \left|\psi_1\big(R_\alpha^{j_{p+a_{n+1}m}}(\overline{x})\big) - \psi_1\big(R_\alpha^{j_{p+a_{n+1}m}}(\overline{x}+\beta_n)\big)\right|. \end{multline*} As can be seen in Figure~\ref{FigRenor666}, and because of the form of the rotation map $R_\alpha$ in the renormalization tower, among $a_{n+1}$ consecutive terms $R_\alpha^{j_r}(\overline x)$, at most one of them does not belong to $\Delta^{(n-1)}$. There are between $a_{n+1}-1$ and $a_{n+1}$ remaining terms, which belong to $\Delta^{(n-1)}$. These remaining terms are made of at most two pieces of orbit of the rotation of angle $\rho_n=q_n\alpha-p_n$, and if there are two of them, all the terms of one piece lie to the left of all the terms of the other (for the order on the segment $\Delta^{(n-1)}$). One can isolate the points $R_\alpha^{j_{p+a_{n+1}m}}(\overline{x})$ that belong to $[0,2\lambda^{(n)}]$ (there are at most two of them, in black in Figure~\ref{FigRenor666}) -- which will give the first term of \eqref{multfirst} -- from the others, in red in Figure~\ref{FigRenor666} -- which will give the second term of \eqref{multfirst}. Reasoning as in the proof of Lemma~\ref{LemFinalMartin}, one gets \begin{multline}\label{multfirst} \sum_{p=0}^{a_{n+1}-1}\left|\psi_1\big(R_\alpha^{j_{p+a_{n+1}m}}(\overline{x})\big) - \psi_1\big(R_\alpha^{j_{p+a_{n+1}m}}(\overline{x}+\beta_n)\big)\right|\\ \le \frac{32}{u_n} \sum_{j=0}^{i-1} \psi_1\big(R_\alpha^j(\overline x+\beta_n)\big) + \sum_{p=1}^{a_{n+1}} \left(\frac{1}{p\lambda^{(n)}} - \frac{1}{(p+1)\lambda^{(n)}}\right)\\ \le \frac{32}{u_n} S_i(x+\beta_0) + \frac{1}{\lambda^{(n)}}. \end{multline} But by \eqref{EqTildeK} and \eqref{EqTotTime} one has (for $n$ large enough) \[\lceil k/a_{n+1} \rceil \le \frac{a_{n+1}+1}{a_{n+1}} v_{n+1} + 1 \le 3v_{n+1},\] and also, because $k\ge a_{n+1}$, one has $\lceil k/a_{n+1} \rceil \le \frac{2k}{a_{n+1}}$.
These two bounds lead to \[\sum_{r=0}^k\left|\psi_1\big(R_\alpha^{j_r}(\overline{x})\big) - \psi_1\big(R_\alpha^{j_r}(\overline{x}+\beta_n)\big)\right| \le \frac{96}{u_n} v_{n+1}\, S_i(x+\beta_0) + \frac{2 k}{a_{n+1}\lambda^{(n)}}.\] Using Equation \eqref{EqTheLast} together with \eqref{Eqaeta}, this gives \begin{equation*} \frac{\sum_{r=0}^k\left|\psi_1\big(R_\alpha^{j_r}(\overline{x})\big) - \psi_1\big(R_\alpha^{j_r}(\overline{x}+\beta_n)\big)\right|}{S_i(x+\beta_0)} \le \frac{96}{u_n} v_{n+1} + \frac{8}{\log q_n}. \end{equation*} Combined with \eqref{eqpropvn}, this implies that \begin{equation}\label{multfirst'} \frac{\sum_{r=0}^k\left|\psi_1\big(R_\alpha^{j_r}(\overline{x})\big) - \psi_1\big(R_\alpha^{j_r}(\overline{x}+\beta_n)\big)\right|}{S_i(x+\beta_0)} \underset{n\to +\infty}{\longrightarrow}0 . \end{equation} The same proof works for $\psi_2$ instead of $\psi_1$, and also for $j'_r$ instead of $j_r$ (second term of \eqref{EqFirstDecomposition}). \medskip \noindent {\bfseries Case 2:} $k < a_{n+1}$.\\ As in the first case, and as in the proof of Lemma~\ref{LemFinalMartin}, one can isolate the points $R_\alpha^{j_r}(\overline{x})$ that belong to $[0,2\lambda^{(n)}]$ (there are at most two of them, in black in Figure~\ref{FigRenor666}) -- which will give the first term of \eqref{multsecond} -- from the others, in red in Figure~\ref{FigRenor666} -- which will give the second term of \eqref{multsecond}. For this second family of points, let us call $m_0$ the index of the first ground floor interval of length $\lambda^{(n)}$ containing one of these points; in other words, there is no point $R_\alpha^{j_r}(\overline{x})$ of this family in $[0,\lambda^{(n+1)}+2\lambda^{(n)})$ and there is one in $[\lambda^{(n+1)}+2\lambda^{(n)},\lambda^{(n+1)}+3\lambda^{(n)}]$. In this case, one gets \begin{multline}\label{multsecond} \sum_{r=0}^k\left|\psi_1\big(R_\alpha^{j_r}(\overline{x})\big) - \psi_1\big(R_\alpha^{j_r}(\overline{x}+\beta_n)\big)\right|\\ \le \frac{32}{u_n} \sum_{j=0}^{i-1} \psi_1\big(R_\alpha^j(\overline x+\beta_n)\big) + \sum_{p=m_0}^{k} \left(\frac{1}{p\lambda^{(n)}} - \frac{1}{(p+1)\lambda^{(n)}}\right)\\ \le \frac{32}{u_n} S_i(x+\beta_0) + \frac{1}{m_0\lambda^{(n)}}. \end{multline} On the other hand, by the same kind of reasoning, using $k\ge v_n$ and Lemma~\ref{LemSerHarmo}, \begin{align} \nonumber S_i(x+\beta_0) & \ge \sum_{r=0}^{k-1}\psi_1\big(R_\alpha^{j_r}(\overline x + \beta_n)\big)\\ & \ge \sum_{p=m_0}^{m_0+v_n-3} \frac{1}{p\lambda^{(n)}} \ge \frac{1}{\lambda^{(n)}}\log\left(1+\frac{v_n-3}{m_0}\right).\label{EqMinorSiSpec} \end{align} We will also use the following fact, which easily comes from the concavity of $\log$: for any $m_0\ge 1$, \begin{equation*} m_0\log\left(1+\frac{v}{m_0}\right) \ge \log\big(1+v\big). \end{equation*} Applying this to \eqref{EqMinorSiSpec} and \eqref{multsecond}, one gets \begin{equation}\label{multsecond'} \frac{\sum_{r=0}^k\left|\psi_1\big(R_\alpha^{j_r}(\overline{x})\big) - \psi_1\big(R_\alpha^{j_r}(\overline{x}+\beta_n)\big)\right|}{S_i(x+\beta_0)} \le \frac{32}{u_n} + \frac{1}{\log\big(v_n-2\big)}. \end{equation} The same holds for $j'_r$ instead of $j_r$. \medskip For this second case $k<a_{n+1}$, we also need to treat the case of $\psi_2$. The reader should refer to Figure~\ref{FigRenor6666}. Now, for $0 \le r \le k$, we denote by $j_r$ (resp. $j'_r$) the time corresponding to the closest return to 0 of the orbit of $R_\alpha^{r q_n}(\overline x)$ (resp.
$R_\alpha^{r q_n}(\overline x+\beta_n)$) of length $q_n$ in the fundamental domain $(-1,0]$ (which is adapted to the map $\psi_2$). This time, we consider the interval $J$ made of the union of $\Delta^{(n)}$ with the interval of $\xi^{(n)}$ on its left, denoted by $\Delta_\iota^{(n-1)}$ (by the properties of the renormalization procedure, we know that this interval is some $\Delta_j^{(n-1)}$ and not some $\Delta_j^{(n)}$). As for the renormalization interval $\Delta^{(n)}$, the first return map in restriction to $J$ is the rotation of angle $\rho_n = q_n\alpha-p_n$, and the return time is always smaller than $q_n$. This implies that the points $R_\alpha^{j_r}(\overline x)$ form an orbit segment for $R_{\rho_n}$ of length $k<a_{n+1}$ -- in particular, quotienting $J$ by its endpoints to get a circle, the order of the points on this circle corresponds to the order of their indices $r$. \begin{figure} \begin{tikzpicture}[scale=.75] \fill[fill=green, opacity=.1] (13.5,.5) rectangle (16,1.5); \fill[fill=blue, opacity=.1] (0,0) rectangle (13.5,2); \fill[fill=green, opacity=.1] (1,0) rectangle (13.5,2); \draw[color=blue!60!black] (0,.5) -- (13.5,.5); \draw[color=blue!60!black] (0,0) -- (13.5,0); \draw[color=blue!60!black] (0,1) -- (13.5,1); \draw[color=blue!60!black] (0,1.5) -- (13.5,1.5); \draw[color=blue!60!black] (0,2) -- (13.5,2); \draw[color=blue!60!black] (13.5,0) -- (13.5,2); \draw[color=blue!60!black] (0,0) -- (0,2); \draw[color=green!60!black] (13.5,1.5) -- (16,1.5); \draw[color=green!60!black] (13.5,1) -- (16,1); \draw[color=green!60!black] (1,0) -- (1,2); \draw[color=green!60!black] (3.5,0) -- (3.5,2); \draw[color=green!60!black] (6,0) -- (6,2); \draw[color=green!60!black] (8.5,0) -- (8.5,2); \draw[color=green!60!black] (11,0) -- (11,2); \draw[thick] (16,.5) -- (16,2); \draw (0,.5) -- (16,.5); \draw (16,.5) node{$\times$} node[below right]{$0$}; \fill[color=red!50!white] (3.7,1) circle (.1); \draw[color=red!50!white] (3.7,1) circle (.15); \fill[color=black] (3.8,1) node[above right]{$\overline x$}; \fill[color=red!50!white] (3.7,1.5) circle (.1); \fill[color=red!50!white] (1.2,0) circle (.1); \fill[color=red!80!black] (1.2,.5) circle (.1); \fill[color=red!50!white] (1.2,1) circle (.1); \fill[color=red!50!white] (1.2,1.5) circle (.1); \fill[color=black] (15.7,.5) circle (.1); \fill[color=red!50!white] (15.7,1) circle (.1); \fill[color=red!50!white] (13.2,0) circle (.1); \fill[color=black] (13.2,.5) circle (.1); \fill[color=red!50!white] (13.2,1) circle (.1); \fill[color=red!50!white] (13.2,1.5) circle (.1); \fill[color=red!50!white] (10.7,0) circle (.1); \fill[color=yellow] (10.7,.5) circle (.1); \fill[color=red!50!white] (10.7,1) circle (.1); \draw[<-] (14.75,.35) to[bend left] (14.55,-.5) node[below] {$\Delta^{(n)}$}; \draw[<-] (7.5,.35) to[bend right] (7.8,-.5) node[below] {$\Delta_\iota^{(n-1)}$}; \draw [decorate,decoration={brace,amplitude=5pt},xshift=0,yshift=.2cm] (1,2) -- (13.5,2) node [black,midway,yshift=0.5cm] {$a_{n+1}$ towers}; \draw[<->] (-.2,0) --node[midway, left]{$q_n$} (-.2,2); \draw[<->] (16.3,.5) --node[midway, right]{$q_{n-1}$} (16.3,1.5); \draw[<->] (1,-.2) --node[midway, below]{$\lambda^{(n)}$} (3.5,-.2); \draw[<->] (1,-.2) --node[midway, below]{$\lambda^{(n+1)}$} (0,-.2); \end{tikzpicture} \caption{Adaptation of Figure~\ref{FigRenor666} for $\psi_2$: we consider the large interval $\Delta_\iota^{(n-1)}$ on the left of $\Delta^{(n)}$ (the interval in black inside the large tower), which lies inside the tower above $\Delta^{(n-1)}$ (the bottom interval of the left
tower). Among the positive orbit of $\overline x$, there are four types of points: the pink ones, which contribute to the last term of \eqref{EqFirstDecomposition}, and the red, the yellow and the black ones. There are at most $v_n$ black ones; the red and the yellow ones are treated as in the proof for $\psi_1$. }\label{FigRenor6666} \end{figure} Among these points, we isolate the $v_n$ ones that are the closest to 0 (in black in Figure~\ref{FigRenor6666}); we denote by $B$ the set of indices of these points. It separates the remaining points $R_\alpha^{j_r}(\overline x)$ into (at most) two orbit segments of $R_{\rho_n}$ in $J$, in red and yellow in Figure~\ref{FigRenor6666}. Let us denote by $R$ and $Y$ the sets of indices of these orbit segments. For the black points, using \eqref{EqBoundCloose}, one has \[\sum_{r\in B}\left|\psi_2\big(R_\alpha^{j_r}(\overline{x})\big) - \psi_2\big(R_\alpha^{j_r}(\overline{x}+\beta_n)\big)\right| \le \frac{16 v_n}{u_n}S_i(\overline x+\beta_n).\] For the red points, if the set $R$ has cardinality smaller than $v_n$, the same estimate holds. If not, then the proof strategy of \eqref{multsecond} and \eqref{EqMinorSiSpec} works identically. The same is true for the yellow points. Finally, this proves that \[\frac{\sum_{r=0}^k\left|\psi_2\big(R_\alpha^{j_r}(\overline{x})\big) - \psi_2\big(R_\alpha^{j_r}(\overline{x}+\beta_n)\big)\right| }{S_i(\overline x+\beta_n)} \underset{n\to +\infty}{\longrightarrow}0.\] \medskip Combining this with \eqref{EqFirstDecomposition}, \eqref{EqFirstIziTerm}, \eqref{multfirst'} and \eqref{multsecond'}, we deduce that \[\frac{\big|S_i(\overline{x}) - S_i(\overline{x}+\beta_n)\big|}{S_i(x+\beta_0)} \underset{n\to +\infty}{\longrightarrow}0.\] \end{proof} \begin{proof}[Proof of Lemma~\ref{LemFirstTerm9}] We now treat the first term of \eqref{EqDivBeta}. Remark that the number $\ell=\ell_{n-1}$ depends on $n$, so the results of Section \ref{SecPhysSame} do not apply directly. We thus want to compare $|S_i(x) - S_i(x+\ell\alpha)|$ with $S_i(x)+S_i(x+\beta_0)$. Let us compute (using $\ell < 2q_n$ and $i\ge v_n q_n$ with $v_n\ge 4$): \begin{align*} S_i(x) - S_i(x+\ell\alpha) & = \sum_{j=0}^{i-1} \left(\psi(R_\alpha^j(x)) - \psi(R_\alpha^j(x+\ell\alpha))\right)\\ & = \sum_{j=0}^{i-1} \left(\psi(R_\alpha^j(x)) - \psi(R_\alpha^{j+\ell}(x))\right)\\ & = \sum_{j=0}^{\ell-1} \psi(R_\alpha^j(x)) - \sum_{j=i}^{i+\ell-1} \psi(R_\alpha^j(x)). \end{align*} Hence, \[\big|S_i(x) - S_i(x+\ell\alpha)\big| \le S_{\ell}(x) + S_{\ell}\big(R_\alpha^{i}(x)\big).\] By Lemma~\ref{EqSellFinal}, one has (using $\ell\le 2q_n$) \[S_{\ell}(x) \le \frac{8\log q_n}{\lambda^{(n-1)}} + \psi(y_0) + \psi(y_1),\] where $y_0$ and $y_1$ are the two closest returns to $0$ of the orbit of $x$ of length $2q_n$, and similarly \[S_{\ell}\big(R_\alpha^{i}(x)\big) \le \frac{8\log q_n}{\lambda^{(n-1)}} + \psi(y'_0) + \psi(y'_1),\] with $y_0'$ and $y_1'$ the two closest returns to $0$ of the orbit of $R_\alpha^{i}(x)$ of length\footnote{Note that here the length of the orbit in which we choose the closest points is smaller, but the proof of Lemma~\ref{EqSellFinal} works identically in this case. } $\ell$. Recall that by the choice of $n\ge N$ large enough, denoting $y_0 = x+m_0\alpha$ and $y_1 = x+m_1\alpha$, one has $v_Nq_N \le m_0,m_1 \le \ell <i$. So $\tilde n(m_0), \tilde n(m_1) \ge N$.
As $x\notin \tilde D_{\tilde n(m_0)} \cup \tilde D_{\tilde n(m_1)}$, by Lemma~\ref{LemConvDn3} (and the fact that $(u_n)$ is increasing), \[\psi(y_0) \le \frac{8}{u_{\tilde n(m_0)}} \sum_{j=0}^{m_0-1} \psi\big(R_\alpha^j(x)\big) \le \frac{8}{u_n} \sum_{j=0}^{i-1} \psi\big(R_\alpha^j(x)\big),\] and the same for $y_1$. We now treat the points $y_0'$ and $y_1'$. Note that they are the two returns closest to 0 of the orbit of $R_\alpha^{i-\ell}(\overline x)$ of length $\ell$ (recall that $\overline{x}=x+\ell\alpha$). Let us denote $y_0'=\overline x+m'_0\alpha$ and $y_1'=\overline x+m'_1\alpha$, so that $v_Nq_N \le 2 q_n \le m'_0,m'_1 < i$. As before, as $\overline x\notin \tilde D_{\tilde n(m'_0)} \cup \tilde D_{\tilde n(m'_1)}$, by Lemma~\ref{LemConvDn3}, \[\psi(y'_0) \le \frac{8}{u_{\tilde n(m'_0)}} \sum_{j=0}^{m'_0-1} \psi\big(R_\alpha^j(\overline x)\big) \le \frac{8}{u_n} \sum_{j=0}^{i-1} \psi\big(R_\alpha^j(\overline x)\big),\] and the same for $y'_1$. Using Lemma~\ref{LemSecondTerm9} (which tells us that $S_i(\overline x)$ is asymptotically equivalent to $S_i(x+\beta_0)$), we deduce that for all $n$ large enough, \[\psi(y_0'), \psi(y_1') \le \frac{16}{u_n} S_i(x+\beta_0).\] Putting this bound together with \eqref{EqTheLast}, one gets (using $k\ge v_n$, by \eqref{EqTildeK}) \begin{align*} \frac{\left|S_i(x) - S_i(x+\ell\alpha)\right|}{S_i(x) + S_i(x+\beta_0)} & \le 4\frac{8\log q_n}{\lambda^{(n-1)}}\frac{2\lambda^{(n-1)}}{k\log q_n} + \frac{48}{u_n}\\ & \le \frac{64}{k} + \frac{48}{u_n}\\ & \le \frac{64}{v_n} + \frac{48}{u_n}. \end{align*} Note that it is in this part that we use the $v_n$ factor introduced specifically for this proof. \medskip Putting all these estimates together, and using \eqref{eqpropvn}, one gets \[\frac{\big|S_i(x) - S_i(x+\beta_0)\big|}{S_i(x) + S_i(x+\beta_0)} \underset{i\to+\infty}{\longrightarrow}0.\] \end{proof} \bibliographystyle{plain}
\section{Introduction} With advances in neural machine learning \cite{sutskever2014sequence,gehring2017convolutional,vaswani2017attention} and the availability of huge amounts of human conversations on social media \cite{adiwardana2020towards}, building an open domain dialogue system with data-driven approaches has attracted increasing attention from the communities of artificial intelligence and natural language processing. In this work, we are interested in generative approaches. Generative models for open domain dialogues are notorious for replying with generic and bland responses, resulting in meaningless and boring conversations \cite{li2015diversity}. This deficiency is particularly severe when human participants attempt to dive into specific topics in conversation \cite{dinan2018wizard}. As a result, there is still a big gap between conversation with existing systems and conversation with humans. \begin{table}[] \footnotesize \begin{tabular}{c|p{150pt}} \hline \multicolumn{2}{c}{Context} \\ \hline \multicolumn{1}{c|}{A} & I just discovered star trek and I really like watching star trek . \\ \multicolumn{1}{c|}{B} & Gene Roddenberry created it based upon science fiction and it is American media. \\ & ... \\ \multicolumn{1}{c|}{A} & If I remember Captain Kirk was not the original captain . \\ \multicolumn{1}{c|}{B} & The Star Trek Canon of the series an animated had 5 spin offs. \\ \multicolumn{1}{c|}{A} & I watched a little of the next generation but could not get into it like i did with the original show . \\ \hline \multicolumn{2}{c}{Response} \\ \hline \multicolumn{1}{c|}{Human} & These adventures went on but were short lived and six feature films. \\ \hline \multicolumn{1}{c|}{DialoGPT} & I think it's worth it. \\ \hline \end{tabular} \caption{An example from the test set (Test Seen) of Wizard of Wikipedia \cite{dinan2018wizard}.} \label{tab:intro} \end{table} Very recently, two lines of research have emerged that seem promising for bridging the gap. One is to apply large-scale pre-trained language models, such as GPT-2 \cite{radford2019language}, to the task of open domain dialogue generation. Prototypes such as DialoGPT \cite{zhang2019dialogpt} have exhibited compelling performance in generating responses that make sense under conversation contexts and at the same time carry specific content for keeping the conversation going. While the giant language models can memorize enough patterns in language during pre-training, they only capture ``average'' semantics of the data \cite{zhang2019dialogpt}. As a result, responses could still be bland or inappropriate when specific knowledge is required, as illustrated by the example in Table \ref{tab:intro}. The other line is to ground dialogue generation in extra knowledge such as unstructured documents \cite{zhao2020low}. By this means, the documents (e.g., wiki articles) serve as content sources, and make a dialogue system knowledgeable regarding a variety of concepts in discussion. However, collecting enough dialogues that are naturally grounded on documents for model training is not trivial. Although some benchmarks built upon crowd-sourcing have been released by recent papers \cite{zhou2018dataset,dinan2018wizard,gopalakrishnan2019topical}, the small training size makes the generation models generalize badly on unseen topics \cite{dinan2018wizard}, and the cost of building such data also prevents the transfer of techniques proven on the benchmarks to new domains and new languages.
Encouraged by the results on pre-training for dialogue generation and knowledge-grounded dialogue generation, and motivated by the problems on both sides, we consider bringing the two together in this work. Specifically, we propose knowledge-grounded dialogue generation with pre-trained language models in order to endow a generative model with both rich knowledge and good generalization ability\footnote{In this paper, we assume that knowledge is retrieved from documents.}. The challenge is that pre-trained language models often set constraints on the maximum number of tokens they can handle (e.g., the maximum number for GPT-2 \cite{radford2019language} is $1024$), and this hinders exploitation of the knowledge text, which could be rather long and redundant (e.g., in Wizard of Wikipedia \cite{dinan2018wizard}, on average each conversation context is associated with $61.2$ sentences retrieved from wiki articles, and the average number of tokens in the extra knowledge is $1625.6$). Indeed, the conflict between model capacity and the ability required for processing long knowledge input represents an essential obstacle to applying pre-trained language models to knowledge-grounded dialogue generation, since on the one hand we always have to set an upper bound on the capacity of pre-trained models in order to handle massive text corpora, and on the other hand we need to keep sufficient candidates with rich enough content in the procedure of response generation in order to guarantee the recall of relevant knowledge. To overcome the challenge, we consider equipping the pre-trained response generation model with a knowledge selection module whereby the redundant knowledge input is slimmed, keeping only the information relevant to the conversation context, so as to meet the capacity constraint. While some recent papers on knowledge-grounded dialogues have paid attention to the problem of knowledge selection \cite{lian2019learning,kim2020sequential,ren2019thinking}, the knowledge selection module is either deeply coupled with specially configured models \cite{lian2019learning,ren2019thinking}, and thus is incompatible with pre-trained language models, or it is learned with human annotations \cite{dinan2018wizard,kim2018semantic}, which are difficult to obtain in practice (e.g., the dataset in \cite{zhou2018dataset} does not contain annotations for knowledge selection). Therefore, we propose an unsupervised approach in which learning of knowledge selection and fine-tuning of response generation are jointly conducted with unlabeled dialogues. Specifically, we build the knowledge selection module on the basis of BERT, and formalize knowledge selection as a sequence prediction process, by which the model can take advantage of pre-training techniques and dynamically determine the relevant knowledge for a given context. The learning algorithm starts from training with pseudo ground-truth that is constructed by making full use of responses as an alternative to human annotations, and then alternately updates the knowledge selection model and the response generation model through a reinforcement learning approach and a curriculum learning approach respectively. Thus, knowledge selection is further optimized with feedback from response generation, and the knowledge used for fine-tuning the response generation model gradually moves from the pseudo ground-truth to the prediction of the knowledge selection module.
We test the proposed method on two benchmarks of knowledge-grounded dialogue generation: Wizard of Wikipedia \cite{dinan2018wizard} and CMU Document Grounded Conversations \cite{zhou2018dataset}. Evaluation results indicate that our model can significantly outperform state-of-the-art methods as well as a few pre-trained models used in heuristic ways, and thus achieves a new state-of-the-art on the benchmarks. Moreover, as a byproduct, the knowledge selection module also outperforms the state-of-the-art model in terms of accuracy of knowledge selection on Wizard of Wikipedia, implying that other models could also benefit from the component. Our contributions in this paper are three-fold: (1) proposal of a knowledge selection module for applying pre-trained language models to the task of knowledge-grounded dialogue generation; (2) proposal of an unsupervised approach in which learning of knowledge selection and fine-tuning of the pre-trained model are conducted in a joint manner; and (3) empirical verification of the effectiveness of the proposed method on benchmarks of knowledge-grounded dialogue generation. \section{Related Work} Early work on end-to-end open domain dialogue generation is inspired by research on machine translation \citep{ritter2011data,shangL2015neural,vinyals2015neural}. Later, the vanilla encoder-decoder architecture is widely extended to improve the diversity of responses \cite{li2015diversity,xing2017topic,zhao2017learning,tao2018get}; to model the structure of conversation contexts \cite{serban2016building,serban2017hierarchical,xing2017hierarchical,zhang2019recosa}; to control attributes of responses \cite{xu2019neural,zhou2017emotional,zhang2018learning,wang2018learning,see2019makes}; and to bias responses to some specific personas \cite{li2016persona,zhang2018personalizing}. Recently, grounding dialogue generation in extra knowledge is emerging as an important step towards human-like conversational AI, where the knowledge could be obtained from knowledge graphs \cite{zhou2018commonsense,moon2019opendialkg,tuan2019dykgchat}, retrieved from unstructured documents \cite{dinan2018wizard,lian2019learning,zhao2020low,kim2020sequential}, or extracted from visual background \cite{mostafazadeh2017image,shuster2018engaging,huber2018emotional}. In this work, we study document-grounded dialogue generation. Rather than learning from scratch like most existing work, we take advantage of pre-trained language models and achieve a new state-of-the-art on the benchmarks of the task. Big, deep neural language models pre-trained on huge unlabeled text corpora have led to strong improvements on numerous natural language understanding and natural language generation benchmarks \cite{devlin2018bert,yang2019xlnet,liu2019roberta,radford2019language,song2019mass,dong2019unified,lewis2019bart}, and therefore are revolutionizing almost the full spectrum of NLP applications \cite{raffel2019exploring,sun2019utilizing,qiao2019understanding,zhang2019hibert,lample2019cross} and some interdisciplinary applications in NLP and computer vision \cite{lu2019vilbert,su2019vl,sun2019videobert}. In the context of dialogue generation, by fine-tuning GPT-2 \cite{radford2019language} in different sizes on social media data, recent work \cite{zhang2019dialogpt,wolf2019transfertransfo} has shown promising progress on conversation engagement and commonsense question-answering.
In this work, we further explore the application of pre-training to the task of open domain dialogue generation by equipping pre-trained language models with external knowledge. Different from a very recent paper on pre-training for low-resource knowledge-grounded dialogue generation \cite{zhao2020low}, this work presents an in-depth investigation of how to unleash the power of existing pre-trained language models on the task when the input exceeds the capacity of the models. \section{Preliminary} \subsection{Problem Formalization} Suppose that we have a dataset $\mathcal{D} = \{(U_i, D_i, r_i)\}_{i=1}^N$, where $\forall i \in \{1,\ldots,N\}$, $U_i$ is a dialogue context, $D_i$ is a document that contains knowledge relevant to $U_i$, and $r_i$ is a response to $U_i$ based on $D_i$. The goal is to learn a generation model $P(r|U,D;\theta)$ ($\theta$ denotes the parameters of the model) from $\mathcal{D}$, such that given a new dialogue context $U$ associated with a document $D$, one can generate a response $r$ following $P(r|U,D; \theta)$. \subsection{Pre-trained Language Models} We define $P(r|U,D; \theta)$ on the basis of GPT-2 from OpenAI \cite{radford2019language}. GPT-2 models are transformer language models with a stack of masked multi-head self-attention layers, and are learned from large-scale web text. To apply GPT-2 to the task of knowledge-grounded dialogue generation, we formulate the generation problem as \begin{equation} \begin{aligned} P(r|U,D; \theta)&=P(r|g(U,D); \theta) \\ &=\prod_{t=1}^{l_r} P(r_{t}|g(U,D), r_{1:t-1}; \theta), \end{aligned} \end{equation} where $g(U,D)$ tailors $U \cup D$ to meet the length constraint of a GPT-2 model as the input of generation, and $r_t$ refers to the $t$-th token of $r$, whose length is supposed to be $l_r$. The problem then boils down to (1) how to define $g(U,D)$; and (2) how to fine-tune $\theta$ (and probably learn $g(U,D)$) with $\mathcal{D}$. In this work, we assume that labels that indicate the ground-truth knowledge are not available, which is practical but makes the problem even more challenging. Since $D$ could be rather redundant, with a lot of information irrelevant to the topic or the context of the conversation, simply truncating the concatenation of the sentences of $U$ and $D$ as $g(U,D)$ may cut the relevant knowledge and introduce noise into response generation, which hurts the performance of the GPT-2 model, as will be demonstrated in the experiments. Therefore, we consider learning a $g(U,D)$ that can distill useful information from $D$ for the GPT-2 model, as will be elaborated in the next section. \section{Approach} Aiming to learn a $g(U,D)$ for applying GPT-2 to the task of knowledge-grounded dialogue generation, we need to deal with several challenges: (1) how to model the correlation between a context and the external knowledge; (2) how to learn $g(U,D)$ when labels of ground-truth knowledge are absent; and (3) how to jointly optimize $g(U,D)$ and the GPT-2 model with $\mathcal{D}$ so that the two can boost each other. Figure \ref{fig:one} illustrates the architecture of the model. On the basis of the transformer architecture, the knowledge selection module is made up of a context-aware knowledge encoder and a sequential knowledge selector. The former captures interaction patterns between a context $U$ and each sentence in $D$ through a stack of self-attention layers, and the patterns are then fed to the latter to decode useful knowledge one sentence per step.
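To make the data flow concrete before detailing each component, the following is a minimal sketch of the generation step once the knowledge selection module has produced the slimmed input $g(U,D)$. It is only an illustration under our own simplifying assumptions (a plain GPT-2 checkpoint from the \texttt{transformers} library, whitespace concatenation, and greedy decoding); it is not the exact implementation used in the experiments.
\begin{verbatim}
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def generate_response(selected_knowledge, context_utterances,
                      max_new_tokens=40):
    """Greedy decoding of P(r | g(U, D)): the selected knowledge
    sentences and the dialogue context form a single prefix."""
    prefix = " ".join(selected_knowledge + context_utterances)
    input_ids = tokenizer.encode(prefix, return_tensors="pt")
    # Respect GPT-2's 1024-token window.
    input_ids = input_ids[:, -(1024 - max_new_tokens):]
    response_ids = []
    with torch.no_grad():
        for _ in range(max_new_tokens):
            logits = model(input_ids).logits  # (1, seq_len, vocab)
            next_id = logits[0, -1].argmax().view(1, 1)
            if next_id.item() == tokenizer.eos_token_id:
                break
            response_ids.append(next_id.item())
            input_ids = torch.cat([input_ids, next_id], dim=1)
    return tokenizer.decode(response_ids)
\end{verbatim}
The same concatenated sequence is used at training time to compute the token-level log-likelihood in the factorization above.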
Since human annotations are not accessible, the learning method begins with pseudo ground-truth constructed by making full use of responses, and the optimization of $g(U,D)$ and that of the GPT-2 generation model are alternately conducted with a reinforcement learning approach and a curriculum learning approach respectively. \subsection{Context-Aware Knowledge Encoder} We choose BERT \cite{devlin2018bert} as the backbone of the encoder. Thus, the encoder can take advantage of pre-training, and the multi-layer bi-directional attention mechanism in BERT allows a dialogue context and the associated knowledge to sufficiently interact with each other, resulting in context-aware knowledge representations. Specifically, let $U=(u_{1},\ldots, u_{n})$ and $D=(d_{1}, \ldots, d_{m})$ be the context and the knowledge respectively; then we concatenate $\{u_{i}\}_{i=1}^{n}$ as $(w_{1}^u, \cdots, w_{l_u}^u)$, with $w_{i}^u$ the $i$-th word and $l_u$ the length of the sequence, and define the input of the encoder as $\mathcal{S}=(S_1,\ldots, S_m)$ with $S_i$ formulated as \begin{equation} S_i\!=[\mathrm{CLS}]\!w_{1}^u\!\ldots\!w_{l_u}^u\![\mathrm{SEP}]\!w_{i,1}^d\!\ldots\!w_{i,j}^d\!\ldots\!w_{i,l_d}^d\![\mathrm{SEP}], \end{equation} where $w_{i,j}^d$ refers to the $j$-th word of $d_i \in D$, and $l_d$ is the length of $d_i$. Each $S_i \in \mathcal{S}$ passes through the stacked self-attention layers, and is finally represented as $e_{i} = \mathrm{CLS}(\mathrm{BERT}(S_i))$, where $\mathrm{BERT}(S_i)$ refers to the sequence of vectors from the last layer of the encoder and $\mathrm{CLS}(\cdot)$ is a function that returns the first vector of the sequence (i.e., the vector corresponding to the $[\mathrm{CLS}]$ token). The output of the encoder is given by $E=(e_{1}, \ldots, e_{m})$. \subsection{Sequential Knowledge Selector}\label{KS} With $E$ as input, the sequential knowledge selector determines a subset of $D$ (denoted as $D^{\prime}$) as the relevant knowledge and exploits $D^{\prime}$ to construct $g(U, D)$. Since there may exist one-to-many relations between a context and the relevant knowledge \cite{kim2020sequential}, the size of $D^{\prime}$ could vary from context to context. Therefore, we regard the construction of $D^{\prime}$ as a sequence prediction process in which $D^{\prime}$ starts from an empty set and gradually expands by adding one sentence from $D$ per step. By this means, the size of $D^{\prime}$ can also be viewed as a parameter and is dynamically determined according to the given context. Formally, we maintain a sequence of hidden states $\{s_t\}_{t=0}^{T_{U, D}}$ with the initial state $s_0$ a trainable parameter, and weight $\{d_i\}_{i=1}^m$ by an attention mechanism which can be formulated as \begin{equation} \begin{aligned} &P(d_i | U, d_{j_{1:t-1}})=\exp (\alpha_{t, i}) / \sum_{i'} \exp (\alpha_{t, i'}) \\ &\alpha_{t, i}=v^{\top}\tanh(W_{e}e_{i}+W_{s}s_{t}+b), \label{sks} \end{aligned} \end{equation} where $W_{e}$, $W_{s}$, $b$ and $v$ are trainable parameters. Then $d_{j_t}$ will be added to $D^{\prime}$ if $j_t = \operatorname{argmax}_{i\in \{1,\ldots,m\}}P(d_i | U, d_{j_{1:t-1}})$. After that, $s_{t+1}$ is calculated by \begin{equation} s_{t+1}=\operatorname{LSTM}(e_{j_t}, s_{t}). \end{equation} To determine $T_{U,D}$, we introduce a special embedding $e_{spe}$ into $E$, and terminate the prediction process if $e_{spe}$ is selected or an upper bound $T_{max}$ is reached. Finally, $g(U, D)$ is defined as the concatenation of the sentences in $U \cup D^{\prime}$.
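For concreteness, a minimal sketch of the sequential knowledge selector is given below, with the BERT encoding abstracted into precomputed context-aware vectors $e_1,\ldots,e_m$; the dimensions, the initialization, and the class names are illustrative assumptions rather than the exact configuration.
\begin{verbatim}
import torch
import torch.nn as nn

class SequentialKnowledgeSelector(nn.Module):
    """Picks knowledge sentences one per step, stopping when the
    special termination embedding is chosen or T_max is reached."""

    def __init__(self, hidden=256, bert_dim=768, t_max=2):
        super().__init__()
        self.s0 = nn.Parameter(torch.zeros(1, hidden))       # trainable s_0
        self.e_spe = nn.Parameter(torch.randn(1, bert_dim))  # termination embedding
        self.W_e = nn.Linear(bert_dim, hidden, bias=False)
        self.W_s = nn.Linear(hidden, hidden)                 # its bias plays the role of b
        self.v = nn.Linear(hidden, 1, bias=False)
        self.lstm = nn.LSTMCell(bert_dim, hidden)
        self.t_max = t_max

    def forward(self, E):
        # E: (m, bert_dim) context-aware sentence encodings from BERT.
        E = torch.cat([E, self.e_spe], dim=0)                # append the stop option
        spe_idx = E.size(0) - 1
        s, c = self.s0, torch.zeros_like(self.s0)
        selected = []
        for _ in range(self.t_max):
            scores = self.v(torch.tanh(self.W_e(E) + self.W_s(s))).squeeze(-1)
            probs = torch.softmax(scores, dim=0)             # attention over candidates
            j = int(probs.argmax())
            if j == spe_idx:
                break
            selected.append(j)
            s, c = self.lstm(E[j].unsqueeze(0), (s, c))      # s_{t+1} = LSTM(e_{j_t}, s_t)
        return selected
\end{verbatim}
At inference time the argmax above is used; in the reinforcement step described below, candidates are instead sampled from the distribution so that the selector can be trained with policy gradients.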
\begin{algorithm*} \small \begin{algorithmic}[1] \State {\bfseries Input:} Training data $\mathcal{D}$, pre-trained GPT-2, initial curriculum rate $p_0$, exponential decay constant $\lambda$, maximum step $M$. \State Construct $\mathcal{D}_{K}$ and $\mathcal{D}_{G}$. \State Optimize $g(U, D)$ and GPT-2 using MLE on $\mathcal{D}_K$ and $\mathcal{D}_G$ respectively. \For {$m \gets 1 \mbox{~to~} M$} \State Sample a mini-batch $\{(U_i, D_i, r_i)\}$ from $\mathcal{D}$. \State Update the parameters of $g(U, D)$ based on Eq.~\ref{eq:ks}. \Comment {the Reinforcement Step.} \State Sample $\{z_{i}\}$ from a Bernoulli distribution parameterized by $p$, where $p=p_{0} e^{-\lambda m}$. \State Update the parameters of the GPT-2 model based on Eq.~\ref{eq:gpt2}. \Comment {the Curriculum Step.} \EndFor \State {\bfseries return} $g(U, D)$ and GPT-2. \end{algorithmic} \caption{Optimization Algorithm} \label{algo} \end{algorithm*} \subsection{Learning Method} Learning a $g(U, D)$ without human annotations is not trivial. For example, in a recent paper \cite{kim2020sequential}, when human labels are removed, the accuracy of knowledge selection drops from $27$\% to $0.3$\%. Moreover, since knowledge selection and response generation are entangled, ideally we hope that $g(U,D)$ and the GPT-2 model can enhance each other in learning. However, as the parameters of $g(U,D)$ are far from optimal at the early stage, it is very possible that noise from $g(U,D)$ will be fed to the GPT-2 model and then flow back into the learning procedure of $g(U,D)$, resulting in inferior models on both sides. To cope with these challenges, we propose a joint optimization strategy with weak supervision as follows. The learning algorithm is summarized in Algorithm \ref{algo}. \paragraph{Pseudo Ground-Truth Construction.} \label{sec:pseudo} To alleviate error accumulation in joint optimization, we consider constructing weak supervision and utilizing the signals to warm up the learning of $g(U, D)$ and the fine-tuning of GPT-2 beforehand. The intuition is that responses from humans carry clues to the relevance of the knowledge candidates, and thus can be used to construct pseudo ground-truth. To be specific, we first sort $D = \{d_{t}\}_{t=1}^m$ in a descending order as $\{d_{j_t}\}_{t=1}^m$ according to $\{\operatorname{Sim}(d_t, r)\}_{t=1}^m$, where $\operatorname{Sim}(\cdot, \cdot)$ denotes a similarity function, and then build a subset of $D$ by \begin{equation} \begin{aligned} \bar{D} &= \{d_{j_1},\ldots, d_{j_{\bar{m}}}\}, \\ \bar{m} &= \operatorname{argmax}_{t}(\operatorname{Sim}(d_{j_{1:t}}, r)), \end{aligned} \end{equation} where $d_{j_{1:t}}$ refers to the concatenation of $\{d_{j_i}\}_{i=1}^t$. With $\bar{D}$, $g(U, D)$ and the GPT-2 model are optimized via maximum likelihood estimation (MLE) on $\mathcal{D}_{K} = \{(U_i, D_i, \bar{D}_i)\}_{i=1}^N$ and $\mathcal{D}_{G} = \{(U_i, \bar{D}_i, r_i)\}_{i=1}^N$ respectively. \paragraph{Joint Optimization: the Reinforcement Step.} We exploit the policy-gradient method \cite{sutton2000policy} to continue training $g(U,D)$, whereby $g(U,D)$ is further ``supervised'' by the GPT-2 model and is directly optimized for a target metric (e.g., F1 in the experiments). Specifically, we sample a $\tilde{D}$ according to $P(d_i | U, d_{j_{1:t-1}})$ (Eq.~\ref{sks})
under a termination criterion similar to that used for $\bar{D}$ at each time step, and define the loss function as \begin{equation} \begin{aligned} \mathcal{L}_K &= -\frac{1}{N}\sum_{i=1}^{N}\left(\tilde{R}_i\sum_{t=1}^{\left|\tilde{D}_i\right|}\log P(d_{i,j_t} | U_i, d_{i,j_{1:t-1}})\right), \\ \tilde{R}_i &= R(\tilde{D}_i)- b, \label{eq:ks} \end{aligned} \end{equation} where $R(\tilde{D}_i)=\operatorname{Sim}(r'_i, r_i)$, with $r'_i$ the response generated by the GPT-2 model given $U_i$ and $\tilde{D}_i$, and $b=\sum_{i=1}^{N} R(\tilde{D}_{i}) / N$ is the baseline that is used to reduce the variance of gradient estimation \cite{clark2016deep}. We can see that minimizing $\mathcal{L}_K$ is equivalent to maximizing the conditional likelihood of $\tilde{D}_i$ if it obtains a higher reward than the baseline. \paragraph{Joint Optimization: the Curriculum Step.} Though $g(U,D)$ has been pre-trained with the pseudo ground-truth $\bar{D}$, the relevant knowledge provided by the model (i.e., $D^{\prime}$) may still be worse than $\bar{D}$ at the beginning of fine-tuning. Therefore, we mix $D^{\prime}$ and $\bar{D}$ and exploit a curriculum learning strategy to fine-tune the GPT-2 model, where $D^{\prime}$ and $\bar{D}$ are regarded as hard materials and easy materials respectively, and fine-tuning gradually moves from $\bar{D}$ to $D^{\prime}$. Formally, the loss function for fine-tuning the GPT-2 model is defined by \begin{equation} \begin{aligned} \mathcal{L}_G = &-\frac{1}{N} \sum_{i=1}^{N} \left(z_i \sum_{t=1}^{l_r} \log P(r_{i,t} | U_i, \bar{D}_i, r_{i,1:t-1}) \right.\\ &\left. +(1-z_i) \sum_{t=1}^{l_r} \log P(r_{i,t} | U_i, D_i^{\prime}, r_{i,1:t-1}) \right), \label{eq:gpt2} \end{aligned} \end{equation} where $\{z_i\}$ are sampled from a Bernoulli distribution parameterized by $p$. By gradually shrinking $p$, the generation model is exposed to more hard materials as the learning procedure goes on. \section{Experiments} We conduct experiments on Wizard of Wikipedia (Wizard) \cite{dinan2018wizard} and CMU Document Grounded Conversations (CMU$\_$DoG) \cite{zhou2018dataset}. \subsection{Datasets and Evaluation Metrics} Both datasets are built with crowd-sourcing on Amazon Mechanical Turk, employ Wikipedia as the knowledge base, and are split into training sets, validation sets, and test sets by the data owners. Topics in Wizard cover a wide range ($1,365$ in total), and each conversation happens between a wizard who has access to the knowledge about a specific topic and an apprentice who is just eager to learn from the wizard about the topic. The test set is split into two subsets: Test Seen and Test Unseen. Test Seen contains new dialogues with topics appearing in the training set, while topics in Test Unseen never appear in the training set or the validation set. We follow \cite{dinan2018wizard} and conduct the pre-processing with the code published on ParlAI\footnote{\scriptsize\url{https://github.com/facebookresearch/ParlAI/blob/master/projects/wizard\_of\_wikipedia}}. Different from Wizard, CMU$\_$DoG focuses on the movie domain, and besides wizard-apprentice conversations, the data also contain conversations between two workers who know the document and try to discuss the content in depth. To better compare with the baselines, we adopt the version shared at \url{https://github.com/lizekang/ITDD}. In both datasets, only the turns where knowledge is accessible are considered in response generation. More details are described in the supplementary material.
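Unigram F1 plays a double role in our approach: it instantiates $\operatorname{Sim}(\cdot,\cdot)$ in the pseudo ground-truth construction and the reward above, and it is also one of the automatic metrics used below. As a concrete reference, here is a minimal sketch of both (a simplified paraphrase of the public implementations linked below, with bare whitespace tokenization):
\begin{verbatim}
from collections import Counter

def unigram_f1(hyp: str, ref: str) -> float:
    """Harmonic mean of token-level precision and recall."""
    hyp_tokens, ref_tokens = hyp.lower().split(), ref.lower().split()
    overlap = sum((Counter(hyp_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(hyp_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def pseudo_ground_truth(candidates, response):
    """Rank candidates by F1 against the response, then keep the
    prefix whose concatenation maximizes F1 (the set D-bar)."""
    ranked = sorted(candidates, key=lambda d: unigram_f1(d, response),
                    reverse=True)
    best_t = max(range(1, len(ranked) + 1),
                 key=lambda t: unigram_f1(" ".join(ranked[:t]), response))
    return ranked[:best_t]
\end{verbatim}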
We choose perplexity (PPL) of the ground-truth responses, BOW Embedding~\citep{liu2016not}, and unigram F1 \cite{dinan2018wizard} as metrics, where the embedding-based metrics are computed with the open-source NLG evaluation toolkit available at \url{https://github.com/Maluuba/nlg-eval}, and F1 is calculated with the code published at \url{https://github.com/facebookresearch/ParlAI/blob/master/parlai/core/metrics.py}. Besides automatic evaluation, we randomly sample $300$ examples from Test Seen, Test Unseen, and the test set of CMU$\_$DoG respectively, and recruit $3$ well-educated native speakers as annotators for human evaluation. To each annotator, an example is presented with a context, the associated external knowledge\footnote{For ease of labeling, only the ground-truth knowledge is shown to the annotators in Wizard.}, and model responses (top 1 in greedy search) that are randomly shuffled to hide their sources. The annotators then judge the quality of the responses from three aspects, including \textit{fluency}, \textit{context coherence} and \textit{knowledge relevance}, and assign a score in $\{0, 1, 2\}$ (representing ``bad'', ``fair'', and ``good'') to each response for each aspect. Each response receives $3$ scores per aspect, and the agreement among the annotators is measured via Fleiss' kappa \cite{fleiss1971measuring}. \begin{table*}[h!] \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c} \noalign{\hrule height 1pt} \multicolumn{1}{c|}{\multirow{2}{*}{Models}} & \multicolumn{5}{c|}{Test Seen} & \multicolumn{5}{c}{Test Unseen} \\ \cline{2-11} \multicolumn{1}{c|}{} & PPL & F1 & Average & Extrema & Greedy & PPL & F1 & Average & Extrema & Greedy \\ \hline TMN~\citep{dinan2018wizard} & 66.5 & 15.9 & 0.844 & 0.427 & 0.658 & 103.6 & 14.3 & 0.839 & 0.408 & 0.645 \\ ITDD~\citep{li2019incremental} & 17.8 & 16.2 & 0.841 & 0.425 & 0.654 & 44.8 & 11.4 & 0.826 & 0.364 & 0.624 \\ SKT*~\citep{kim2020sequential} & 52.0 & 19.3 & 0.846 & 0.440 & 0.665 & 81.4 & 16.1 & 0.839 & 0.418 & 0.652 \\ DRD~\citep{zhao2020low} & 19.4 & 19.3 & 0.852 & 0.452 & 0.674 & 23.0 & 17.9 & 0.849 & 0.439 & 0.664 \\ \hline SKT+GPT-2* & 17.6 & 20.3 & 0.866 & 0.460 & 0.679 & 23.7 & 17.8 & 0.860 & 0.437 & 0.664 \\ GPT-2$_{trunc}$ & 14.6(2.2) & 18.7(0.7) & 0.864(0.002) & 0.451(0.006) & 0.674(0.004) & 16.9(3.1) & 18.3(0.6) & 0.862(0.002) & 0.444(0.005) & 0.668(0.003) \\ \hline KnowledGPT & 19.2 & \textbf{22.0} & \textbf{0.872} & \textbf{0.463} & \textbf{0.682} & 22.3 & \textbf{20.5} & \textbf{0.870} & 0.452 & \textbf{0.674} \\ \noalign{\hrule height 1pt} \end{tabular} } \caption{Evaluation results on Wizard. Models that leverage human labels are marked with *. Numbers in bold mean that the improvement over the best baseline is statistically significant (t-test with $p$-value $<$ 0.01).} \label{tab:wizard_exp} \end{table*} \begin{table}[h!] \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{l|c|c|c|c|c} \noalign{\hrule height 1pt} \multicolumn{1}{c|}{Models} & PPL & F1 & Average & Extrema & Greedy \\ \hline TMN~\citep{dinan2018wizard} & 75.2 & 9.9 & 0.789 & 0.399 & 0.615 \\ ITDD~\citep{li2019incremental} & 26.0 & 10.4 & 0.748 & 0.390 & 0.587 \\ DRD~\citep{zhao2020low} & 46.1 & 10.8 & 0.791 & 0.406 & 0.613 \\ \hline GPT-2$_{trunc}$ & 18.6 & 10.8 & 0.730 & 0.419 & 0.597 \\ \hline KnowledGPT & 20.6 & \textbf{13.5} & \textbf{0.837} & \textbf{0.437} & \textbf{0.654} \\ \noalign{\hrule height 1pt} \end{tabular} } \caption{Evaluation results on CMU$\_$DoG.
Numbers in bold mean that the improvement over the best baseline is statistically significant (t-test with $p$-value $<$ 0.01).} \label{tab:cmudog_exp} \end{table} \begin{table*}[] \resizebox{1.0\linewidth}{!}{ \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c} \noalign{\hrule height 1pt} \multicolumn{1}{c|}{\multirow{3}{*}{Models}} & \multicolumn{8}{c|}{Wizard} & \multicolumn{4}{c}{\multirow{2}{*}{CMU$\_$DoG}} \\ \cline{2-9} \multicolumn{1}{c|}{} & \multicolumn{4}{c|}{Test Seen} & \multicolumn{4}{c|}{Test Unseen} & \multicolumn{4}{c}{} \\ \cline{2-13} \multicolumn{1}{c|}{} & Fluency & \begin{tabular}[c]{@{}c@{}}Context\\ Coherence\end{tabular} & \begin{tabular}[c]{@{}c@{}}Knowledge\\ Relevance\end{tabular} & Kappa & Fluency & \begin{tabular}[c]{@{}c@{}}Context\\ Coherence\end{tabular} & \begin{tabular}[c]{@{}c@{}}Knowledge\\ Relevance\end{tabular} & Kappa & Fluency & \begin{tabular}[c]{@{}c@{}}Context\\ Coherence\end{tabular} & \begin{tabular}[c]{@{}c@{}}Knowledge\\ Relevance\end{tabular} & Kappa \\ \hline DRD & 1.71 & 1.50 & 1.26 & 0.67 & 1.64 & 1.44 & 1.18 & 0.69 & 1.58 & 1.48 & 1.07 & 0.60 \\ \hline GPT-2$_{trunc}$ & 1.86 & 1.54 & 1.22 & 0.71 & 1.84 & 1.47 & 1.20 & 0.59 & 1.83 & 1.58 & 1.06 & 0.64 \\ \hline KnowledGPT & 1.89 & 1.67 & 1.71 & 0.70 & 1.88 & 1.60 & 1.68 & 0.73 & 1.83 & 1.65 & 1.50 & 0.77 \\ \noalign{\hrule height 1pt} \end{tabular} } \caption{Human evaluation results on Wizard and CMU$\_$DoG.} \label{tab:human} \end{table*} \subsection{Baselines} The following models are selected as baselines: \textbf{Transformer Memory Network (TMN):} the model proposed in \cite{dinan2018wizard} along with the release of the Wizard data. We implement it using the code shared at {\url{https://github.com/facebookresearch/ParlAI/blob/master/projects/wizard_of_wikipedia}}. \textbf{Incremental Transformer with Deliberation Decoder (ITDD):} a transformer-based model \cite{li2019incremental} that incrementally encodes multi-turn dialogues and knowledge and decodes responses with a deliberation technique. We implement it using the code shared at \url{https://github.com/lizekang/ITDD}. \textbf{Sequential Knowledge Transformer (SKT):} a sequential latent variable model with state-of-the-art performance on knowledge selection published in a very recent paper \citep{kim2020sequential}. Since human labels that indicate ground-truth knowledge are crucial to the performance of the model, we only involve it as a baseline on the Wizard data. The model is implemented with the code shared at \url{https://github.com/bckim92/sequential-knowledge-transformer}. \textbf{Disentangled Response Decoder (DRD):} a model that tackles the low-resource challenge with pre-training techniques \cite{zhao2020low}. We choose the one in which all parameters are fine-tuned with the full training data after pre-training as the baseline, since such a configuration results in state-of-the-art performance on Wizard, as reported in \cite{zhao2020low}. We name our model \textbf{KnowledGPT}. Besides the baselines described above, the following pre-trained models are also included in the comparison in order to gain a thorough understanding of the proposed method: (1) \textbf{GPT-2$_{trunc}$}. We concatenate a context and the associated knowledge as a long document, and then truncate the document to meet the length constraint of the GPT-2 model. This is to check whether such simple heuristics work for the task. Note that in Wizard, we randomly mix the ground-truth knowledge with the other candidates and repeat the procedure $8$ times.
The means with standard deviations (i.e., the numbers in ``( )'') are reported to smooth out the randomness; and (2) \textbf{SKT+GPT-2}. We feed the candidate selected by SKT to GPT-2 for response generation. This is to examine whether we can simply replace the proposed knowledge selection module, as well as the learning approach, with an off-the-shelf knowledge selection model. Similar to SKT, the comparison is only conducted on Wizard. \subsection{Implementation Details} In both Wizard and CMU\_DoG, we set the hidden size and the number of layers of the sequential knowledge selector to $256$ and $1$ respectively. $T_{max}$ for $D^{\prime}$ is set to $1$ in Wizard, and $2$ in CMU\_DoG. We choose BERT (110M) and GPT-2 (117M) as the pre-trained language models in KnowledGPT, and implement the models with the code in \url{https://github.com/huggingface/transformers}. We employ greedy search in response decoding. All models are learned with the Adam \cite{kingma2014adam} optimizer with $\beta_1=0.9$ and $\beta_2=0.999$. In warming up, we define $\operatorname{Sim}(\cdot,\cdot)$ as unigram F1, and optimize $g(U,D)$ and the GPT-2 model with the pseudo ground-truth for $1000$ steps with a batch size of $64$. In joint optimization, the batch size is set to $128$, and the learning rates for $g(U,D)$ and GPT-2 are set to $5e-6$ and $5e-5$ respectively. The learning rate is halved if there is no improvement in terms of PPL on the validation sets. The parameter $p$ of the Bernoulli distribution in the curriculum step is initially set to $1.0$ and anneals with a rate of $1e-5$. Early stopping on validation is adopted as a regularization strategy. \begin{table*}[] \resizebox{1.0\linewidth}{!}{ \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \noalign{\hrule height 1pt} \multicolumn{1}{c|}{\multirow{3}{*}{Models}} & \multicolumn{10}{c|}{Wizard} & \multicolumn{5}{c}{\multirow{2}{*}{CMU$\_$DoG}} \\ \cline{2-11} \multicolumn{1}{c|}{} & \multicolumn{5}{c|}{Test Seen} & \multicolumn{5}{c|}{Test Unseen} & \multicolumn{5}{c}{} \\ \cline{2-16} \multicolumn{1}{c|}{} & PPL & F1 & Average & Extrema & Greedy & PPL & F1 & Average & Extrema & Greedy & PPL & F1 & Average & Extrema & Greedy \\ \hline KnowledGPT & 19.2 & 22.0 & 0.872 & 0.463 & 0.682 & 22.3 & 20.5 & 0.870 & 0.452 & 0.674 & 20.6 & 13.5 & 0.837 & 0.437 & 0.654 \\ \hline -pseudo & 22.3 & 18.3 & 0.857 & 0.436 & 0.662 & 24.1 & 17.9 & 0.854 & 0.430 & 0.655 & 23.2 & 12.9 & 0.815 & 0.440 & 0.639 \\ -joint & 20.0 & 20.4 & 0.863 & 0.457 & 0.675 & 21.8 & 19.5 & 0.861 & 0.451 & 0.669 & 22.6 & 11.7 & 0.806 & 0.438 & 0.635 \\ -curriculum & 19.4 & 21.2 & 0.867 & 0.457 & 0.677 & 21.5 & 20.3 & 0.866 & 0.451 & 0.672 & 21.9 & 12.4 & 0.816 & 0.443 & 0.644 \\ -reinforcement & 19.4 & 21.3 & 0.866 & 0.459 & 0.677 & 21.9 & 20.2 & 0.863 & 0.449 & 0.670 & 20.3 & 12.6 & 0.817 & 0.437 & 0.643 \\ \noalign{\hrule height 1pt} \end{tabular} } \caption{Ablation study on Wizard and CMU$\_$DoG.} \label{tab:abl} \end{table*} \subsection{Evaluation Results} Table \ref{tab:wizard_exp} and Table \ref{tab:cmudog_exp} report the evaluation results on Wizard and CMU$\_$DoG respectively. KnowledGPT achieves new state-of-the-art results on most metrics in both datasets, which demonstrates the effectiveness of large-scale pre-trained language models on the task of knowledge-grounded dialogue generation. GPT-2$_{trunc}$ is worse than KnowledGPT, due to (1) knowledge loss: we find that in $53\%$ of the test examples (Test Seen+Test Unseen), the ground-truth knowledge is cut.
In this case, GPT-2$_{trunc}$ only relies on the context, the related knowledge in other candidates (thanks to the one-to-many relations between a context and knowledge), and the knowledge packed in the parameters of GPT-2 for responding, which explains why its performance is comparable to that of SKT and DRD; and (2) noisy input: even though the ground-truth knowledge is kept, the redundant and irrelevant information in the knowledge candidates is still harmful. Evidence for this is that GPT-2$_{trunc}$ is worse than KnowledGPT on CMU\_DoG even though we do not cut any of the knowledge (the maximum length of the knowledge input is $502$, and thus is within the constraint of GPT-2). KnowledGPT also outperforms SKT+GPT-2 on Wizard, because (1) KnowledGPT is more accurate than SKT on knowledge selection, even though it does not leverage any human annotations in learning. In fact, the accuracy scores of knowledge selection for SKT are $26.8$ and $18.3$ on Test Seen and Test Unseen respectively, while the two numbers are $28.0$ and $25.4$ respectively for KnowledGPT; and (2) in KnowledGPT, knowledge selection and response generation are jointly optimized. Table \ref{tab:human} shows human evaluation results. While the three models are comparable on \textit{fluency}, KnowledGPT is superior to the others on both \textit{context coherence} and \textit{knowledge relevance}, which is consistent with the results on automatic metrics. All kappa values are no less than $0.6$, indicating substantial agreement among the annotators. We present a case study in the supplementary material. \subsection{Discussions} \textbf{Ablation study.} To understand the impact of the learning strategies on model performance, we compare the full KnowledGPT with the following variants: (1) \textit{-pseudo}: the warming up stage is removed; (2) \textit{-joint}: the joint optimization stage is removed; (3) \textit{-reinforcement}: $g(U,D)$ is fixed after it is optimized with MLE on $\mathcal{D}_{K}$; and (4) \textit{-curriculum}: GPT-2 is fixed after it is optimized with MLE on $\mathcal{D}_{G}$. Table \ref{tab:abl} reports the evaluation results. We can conclude that (1) the pseudo ground-truth plays a crucial role in Wizard, as removing the step causes a dramatic performance drop. This is because in Wizard, there is a strong correlation between the knowledge and human responses. The results indicate that though the pseudo ground-truth is constructed with heuristics, it still contains valuable information and thus allows the following joint optimization to start from a good point. On the other hand, in CMU\_DoG, the crowd-workers do not refer to the external knowledge as much as those workers do in Wizard when they form the responses; (2) the reinforcement step and curriculum step are useful because the reinforcement step allows the knowledge selection module to make better use of GPT-2's feedback, and through the curriculum step GPT-2 can take advantage of the output of the knowledge selection module progressively; (3) joint optimization is meaningful, as removing this stage results in a performance drop.
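To make the curriculum step concrete, the following is a minimal sketch (ours, purely illustrative) of the annealed Bernoulli gate described in the implementation details above. The function and variable names are hypothetical, and we are assuming, based only on the description in this paper, that with probability $p$ GPT-2 is trained on the pseudo ground-truth knowledge and otherwise on the candidate returned by the knowledge selection module, with $p$ annealed linearly from $1.0$ at a rate of $1e-5$ per step.
\begin{verbatim}
import random

def curriculum_pick(step, selected_knowledge, pseudo_ground_truth,
                    anneal_rate=1e-5):
    # p starts at 1.0 and anneals linearly toward 0 as training proceeds.
    p = max(0.0, 1.0 - anneal_rate * step)
    # With probability p, feed GPT-2 the pseudo ground-truth;
    # otherwise, feed it the output of the knowledge selection module.
    if random.random() < p:
        return pseudo_ground_truth
    return selected_knowledge
\end{verbatim}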
\begin{table}[] \centering \resizebox{0.85\linewidth}{!}{ \begin{tabular}{l|c|c|c|c|c|c} \noalign{\hrule height 1pt} \multicolumn{1}{c|}{\multirow{3}{*}{Models}} & \multicolumn{4}{c|}{Wizard} & \multicolumn{2}{c}{\multirow{2}{*}{CMU\_DoG}} \\ \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{Test Seen} & \multicolumn{2}{c|}{Test Unseen} & \multicolumn{2}{c}{} \\ \cline{2-7} \multicolumn{1}{c|}{} & PPL & F1 & PPL & F1 & PPL & F1 \\ \hline T$_{max}$=1 & 19.2 & 22.0 & 22.3 & 20.5 & 20.6 & 12.6 \\ \hline T$_{max}$=2 & 18.2 & 21.3 & 21.0 & 20.3 & 20.6 & 13.5 \\ \hline T$_{max}$=3 & 17.2 & 21.1 & 20.2 & 20.3 & 19.7 & 11.2 \\ \noalign{\hrule height 1pt} \end{tabular} } \caption{Performance of KnowledGPT under different $T_{max}$s.} \label{tab:knowl_len} \end{table} \textbf{Impact of $T_{max}$ (i.e., the upper bound in knowledge selection).} Besides the learning strategies, we are also curious about how $T_{max}$, as part of the termination criterion in knowledge selection described at the end of Section \ref{KS}, influences the performance of KnowledGPT. To this end, we vary the value of $T_{max}$ in $\{1,2,3\}$ and report the evaluation results in Table \ref{tab:knowl_len}. The larger $T_{max}$ is, the more chances KnowledGPT has to involve the ground-truth candidate in generation, and the lower PPL is. This also explains why the PPL of GPT-2$_{trunc}$ is lower than that of KnowledGPT in Table \ref{tab:wizard_exp} and Table \ref{tab:cmudog_exp}. On the other hand, a larger $T_{max}$ also means more noise in generation. That is why when $T_{max}$ exceeds a certain value, F1 begins to drop. \section{Conclusions} We apply large-scale pre-trained language models to the task of knowledge-grounded dialogue generation. To this end, we devise a knowledge selection module, and propose an unsupervised approach to jointly optimizing knowledge selection and response generation. Evaluation results on two benchmarks indicate that our model can significantly outperform state-of-the-art methods. \section{Details of Datasets}\label{app:dataset} Table \ref{tbl:stat} reports the statistics of the Wizard data and the CMU$\_$DoG data. \begin{table}[H] \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{l|c|c|c|c|c|c|c} \noalign{\hrule height 1pt} \multirow{2}{*}{} & \multicolumn{4}{c|}{Wizard of Wikipedia} & \multicolumn{3}{c}{CMU$\_$DoG} \\ \cline{2-8} & Train & Valid & Test Seen & Test Unseen & Train & Valid & Test \\ \hline $\#$ Utterances & 166,787 & 17,715 & 8,715 & 8,782 & 74,717 & 4,993 & 13,646 \\ \hline $\#$ Conversations & 18,430 & 1,948 & 965 & 968 & 3,373 & 229 & 619 \\ \hline $\#$ Topics/Documents & 1,247 & 599 & 533 & 58 & 30 & 30 & 30 \\ \hline Avg. $\#$ of Turns & 9.0 & 9.1 & 9.0 & 9.1 & 22.2 & 21.8 & 22.0 \\ \noalign{\hrule height 1pt} \end{tabular} } \caption{Statistics of the two datasets.} \label{tbl:stat} \end{table} \section{Comparison with DialoGPT} We compare KnowledGPT with DialoGPT in order to learn whether a pre-trained generation model with state-of-the-art performance on open domain dialogues is already good enough when it is fine-tuned with knowledge-grounded dialogues. We discard the associated knowledge and fine-tune DialoGPT on the knowledge-grounded dialogues. We choose the model trained from OpenAI GPT-2 with $345$M parameters, as it shows the best performance in the evaluation in the original paper. The model is implemented based on the code shared at \url{https://github.com/microsoft/DialoGPT}.
Table \ref{tab:dialogpt} shows the results, indicating that external knowledge is necessary even though one has exploited a powerful pre-trained language model for dialogue generation. In CMU$\_$DoG the gap between DialoGPT and KnowledGPT is narrowed because about 35\% of the conversations have a weak correlation with the document (e.g. BLEU $<$ 0.1). \begin{table}[H] \resizebox{1.0\linewidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c} \noalign{\hrule height 1pt} \multirow{3}{*}{Models} & \multicolumn{4}{c|}{Wizard} & \multicolumn{2}{c}{\multirow{2}{*}{CMU$\_$DoG}} \\ \cline{2-5} & \multicolumn{2}{c|}{Test Seen} & \multicolumn{2}{c|}{Test Unseen} & \multicolumn{2}{c}{} \\ \cline{2-7} & PPL & F1 & PPL & F1 & PPL & F1 \\ \hline \multicolumn{1}{l|}{DialoGPT} & 16.0 & 17.9 & 20.0 & 16.8 & 16.9 & 12.3 \\ \hline \multicolumn{1}{l|}{KnowledGPT} & 19.2 & 22.0 & 22.3 & 20.5 & 20.6 & 13.5 \\ \noalign{\hrule height 1pt} \end{tabular} } \caption{Comparison with DialoGPT on Wizard and CMU$\_$DoG.} \label{tab:dialogpt} \end{table} \section{Impact of Maximum Tokens of GPT-2} To further justify our claims on why GPT-2$_{trunc}$ is worse than KnowledGPT, we keep the ground-truth knowledge in the input sequence of GPT-2 and gradually increase the maximum number of tokens on Wizard. As the maximum token limit increases, more irrelevant knowledge is introduced. Note that in practice, one has no way to perfectly locate the ground-truth, and this experiment is only to provide more insight into GPT-2$_{trunc}$. Table \ref{tab:involved} shows the performance of GPT-2$_{trunc}$ as the maximum number of tokens increases, where Ground-truth Percentage indicates the percentage of ground-truth in the input knowledge. First, when the ground-truth is forced to be kept, GPT-2$_{trunc}$ is always better than the one where the ground-truth is randomly mixed with other candidates and risks being cut. This echoes our claim that knowledge loss is one of the reasons for the poor performance of GPT-2$_{trunc}$ in the practical setting. Second, even if the ground-truth is retained, once more noise is introduced, the performance of GPT-2$_{trunc}$ becomes worse. When the length is limited to $128$ tokens, the PPL of the model is poor, mainly because under this limitation, the input sequence of some cases only contains the dialogue context and response. \begin{table}[] \resizebox{1.0\linewidth}{!}{ \begin{tabular}{l|c|c|c|c|c} \noalign{\hrule height 1pt} \multicolumn{1}{c|}{\multirow{2}{*}{Maximum Tokens}} & \multicolumn{2}{c|}{Test Seen} & \multicolumn{2}{c|}{Test Unseen} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Ground-truth\\ Percentage\end{tabular}} \\ \cline{2-5} \multicolumn{1}{c|}{} & PPL & F1 & PPL & F1 & \\ \hline 128 & 10.8 & 30.9 & 11.6 & 30.4 & 62.3\% \\ \hline 256 & 9.3 & 25.6 & 10.0 & 24.6 & 20.3\% \\ \hline 512 & 9.7 & 21.8 & 10.5 & 21.2 & 8.5\% \\ \hline 768 & 10.1 & 20.6 & 10.7 & 20.2 & 5.5\% \\ \hline 1024 & 10.7 & 19.7 & 11.3 & 19.4 & 4.1\% \\ \noalign{\hrule height 1pt} \end{tabular} } \caption{Performance of GPT-2$_{trunc}$ under different maximum tokens with ground-truth knowledge involved.} \label{tab:involved} \end{table} \section{Impact of the Size of GPT-2} We further check whether the performance of KnowledGPT can be improved when the GPT-2 model is replaced with a larger one. Table \ref{tab:gpt_size} shows the results.
Though GPT-2 (345M) can further reduce PPL, it does not bring significant improvement to F1 over GPT-2 (117M), probably because the larger model cannot provide more accurate feedback to the knowledge selection module in learning. Therefore, to balance efficacy and cost, GPT-2 (117M) is still favored in practice. \begin{table}[] \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c} \noalign{\hrule height 1pt} \multirow{3}{*}{Models} & \multicolumn{4}{c|}{Wizard of Wikipedia} & \multicolumn{2}{c}{\multirow{2}{*}{CMU$\_$DoG}} \\ \cline{2-5} & \multicolumn{2}{c|}{Test Seen} & \multicolumn{2}{c|}{Test Unseen} & \multicolumn{2}{c}{} \\ \cline{2-7} & PPL & F1 & PPL & F1 & PPL & F1 \\ \hline \multicolumn{1}{l|}{KnowledGPT (117M)} & 19.2 & 22.0 & 22.3 & 20.5 & 20.6 & 13.5 \\ \hline \multicolumn{1}{l|}{KnowledGPT (345M)} & 16.1 & 22.0 & 17.9 & 20.6 & 18.1 & 13.4 \\ \noalign{\hrule height 1pt} \end{tabular} } \caption{Performance of KnowledGPT under different sizes of GPT-2.} \label{tab:gpt_size} \end{table} \section{Case Study} Table \ref{tab:case1} and Table \ref{tab:case2} show examples from Test Seen and Test Unseen of Wizard. Each example contains the dialogue context and the background knowledge, which is retrieved from Wikipedia given the last two turns of dialogue and the original topic. We can see that KnowledGPT can locate the knowledge more accurately due to its knowledge selection module and reinforcement learning, and make better use of the associated knowledge with the help of curriculum learning. \begin{table*}[] \resizebox{1.0\linewidth}{!}{ \begin{tabular}{cl} \hline \multicolumn{2}{c}{Knowledge (Topic: Cinematography)} \\ \hline \multicolumn{2}{l}{...} \\ \multicolumn{2}{p{800pt}}{Cinematography (also called "direction of photography") is the science or art of motion-picture photography by recording light or other electromagnetic radiation, either electronically by means of an image sensor, or chemically by means of a light-sensitive material such as film stock.} \\ \multicolumn{2}{p{800pt}}{Typically, a lens is used to repeatedly focus the light reflected from objects into real images on the light-sensitive surface inside a camera during a questioned exposure, creating multiple images.} \\ \multicolumn{2}{p{800pt}}{With an electronic image sensor, this produces an electrical charge at each pixel, which is electronically processed and stored in a video file for subsequent display or processing.} \\ \multicolumn{2}{p{800pt}}{The result with photographic emulsion is a series of invisible latent images on the film stock, which are later chemically "developed" into a visible image.} \\ \multicolumn{2}{p{800pt}}{The images on the film stock are played back at a rapid speed and projected onto a screen, creating the illusion of motion.} \\ \multicolumn{2}{p{800pt}}{Cinematography finds uses in many fields of science and business as well as for entertainment purposes and mass communication.} \\ \multicolumn{2}{p{800pt}}{The word "cinematography" was created from the greek words ("kinema"), meaning "movement, motion" and ("graphein") meaning "to record", together meaning "recording motion."} \\ \multicolumn{2}{l}{...} \\ \hline \multicolumn{2}{c}{Context} \\ \hline \multicolumn{1}{c|}{B} & Thinking about taking a class on cinematography, the direction of photography . \\ \multicolumn{1}{c|}{A} & I have always wanted to learn that . \\ \multicolumn{1}{c|}{B} & Me too, I have learned its the science and or art of recording motion picture photography .
\\ \multicolumn{1}{c|}{A} & That is cool. did you enjoy it . \\ \hline \multicolumn{2}{c}{Response} \\ \hline \multicolumn{1}{c|}{TMN} & I think it is a great idea . It is a science fiction film that combines computer and computer science . \\ \hline \multicolumn{1}{c|}{ITDD} & I did . It was developed in the late 19th century . \\ \hline \multicolumn{1}{c|}{SKT} & Yes , I did . I loved it . It was used to focus the light reflected from objects into real images . \\ \hline \multicolumn{1}{c|}{DRD} & I did enjoy it . It is also called `` direction of photography '' . \\ \hline \multicolumn{1}{c|}{SKT+GPT-2} & I did! I love how it's so easy to focus light reflected from objects into real images on the light-sensitive surface inside a camera. \\ \hline \multicolumn{1}{c|}{GPT-2$_{trunc}$} & Yes, I did, I love the fact that it is the science and art of recording motion picture photography \\ \hline \multicolumn{1}{c|}{KnowledGPT} & Yes, I did, I love cinematography, its used in many fields of science and business as well as for entertainment purposes and mass communication \\ \hline \multicolumn{1}{c|}{Human} & Yes and cinematography skills can also be used in mass marketing and other fields as well \\ \hline \end{tabular} } \caption{A case from Test Seen of Wizard of Wikipedia.} \label{tab:case1} \end{table*} \begin{table*}[] \resizebox{1.0\linewidth}{!}{ \begin{tabular}{cl} \hline \multicolumn{2}{c}{Knowledge (Topic: Green)} \\ \hline \multicolumn{2}{l}{...} \\ \multicolumn{2}{p{800pt}}{Green is the color between blue and yellow on the visible spectrum.} \\ \multicolumn{2}{p{800pt}}{It is evoked by light which has a dominant wavelength of roughly 495--570 nm.} \\ \multicolumn{2}{p{800pt}}{In subtractive color systems, used in painting and color printing, it is created by a combination of yellow and blue, or yellow and cyan; in the rgb color model, used on television and computer screens, it is one of the additive primary colors, along with red and blue, which are mixed in different combinations to create all other colors.} \\ \multicolumn{2}{p{800pt}}{By far the largest contributor to green in nature is chlorophyll, the chemical by which plants photosynthesize and convert sunlight into chemical energy.} \\ \multicolumn{2}{p{800pt}}{Many creatures have adapted to their green environments by taking on a green hue themselves as camouflage.} \\ \multicolumn{2}{p{800pt}}{Several minerals have a green color, including the emerald, which is colored green by its chromium content.} \\ \multicolumn{2}{p{800pt}}{In the middle ages and renaissance, green was the color commonly associated with merchants, bankers and the gentry, while red was reserved for the nobility.} \\ \multicolumn{2}{p{800pt}}{For this reason, the costume of the "mona lisa" by leonardo da vinci and the benches in the british house of commons are green while those in the house of lords are red.} \\ \multicolumn{2}{p{800pt}}{It also has a long historical tradition as the color of ireland and of gaelic culture.} \\ \multicolumn{2}{l}{...} \\ \hline \multicolumn{2}{c}{Context} \\ \hline \multicolumn{1}{c|}{B} & Green is my favorite color. Many minerals are a green color, including emeralds. I like emeralds too! \\ \multicolumn{1}{c|}{A} & I like green too. My eyes are green. \\ \multicolumn{1}{c|}{B} & So are mine! In gaelic cultures green is very popular, with a long historical tradition. It is known as the color of ireland. \\ \multicolumn{1}{c|}{A} & Yes I do know it is a very popular color in ireland.
\\ \hline \multicolumn{2}{c}{Response} \\ \hline \multicolumn{1}{c|}{TMN} & I think it ' s a very interesting subject . It ' s a culture of culture. \\ \hline \multicolumn{1}{c|}{ITDD} & Yes , it is one of the primary colors . \\ \hline \multicolumn{1}{c|}{SKT} & It is a popular color in the us \\ \hline \multicolumn{1}{c|}{DRD} & It is common in nature , especially in plants . \\ \hline \multicolumn{1}{c|}{SKT+GPT-2} & Yes, it is very popular in gaelic culture. \\ \hline \multicolumn{1}{c|}{GPT-2$_{trunc}$} & It is! It is also the second largest island in the british isles. \\ \hline \multicolumn{1}{c|}{KnowledGPT} & Yes, it is! In the middle ages and renaissance, green was the color commonly associated with merchants, bankers and the gentry.\\ \hline \multicolumn{1}{c|}{Human} & Back in the middle ages green was often associated with merchants and bankers. Red was nobility's color of choice. \\ \hline \end{tabular} } \caption{A case from Test Unseen of Wizard of Wikipedia.} \label{tab:case2} \end{table*} \subsection*{Acknowledgments} We would like to thank the reviewers for their constructive comments. This work was supported by the National Key Research and Development Program of China (No. 2020AAA0105200), the National Science Foundation of China (NSFC No. 61876196 and NSFC No. 61672058). Rui Yan was sponsored as the young fellow of Beijing Academy of Artificial Intelligence (BAAI). Rui Yan is the corresponding author. \bibliographystyle{acl_natbib}
\section{Introduction} Graph labeling was first introduced by Rosa in 1966 \cite{Rosa}. Since then, numerous types of labeling have been subject to extensive study, including vertex coloring, graceful labeling, harmonious labeling, $k$-radio labeling, and more. For a survey of graph labeling, see Gallian \cite{Gallian}. This paper will continue a portion of Niedzialomski's work \cite{Amanda} on radio labeling Hamming graphs (which have strong connections to coding theory, see \cite{ChangLuZhou} and \cite{Zhou}), paying some attention to graphs of the form $K_n^t$. These graphs are defined via the Cartesian Product. \begin{definition}\label{cartproddef} Given two simple connected graphs, $G$ and $H$, define the \defn{Cartesian Product}, $G\square H$, to have the vertex set $V(G)\times V(H)$ and edges such that a vertex $(v_i,u_i)\in G\square H$ is adjacent to $(v_j,u_j)\in G\square H$ if $v_i=v_j$ and $u_i$ is adjacent to $u_j$ in $H$ or if $u_i=u_j$ and $v_i$ is adjacent to $v_j$ in $G$ (See Figure \ref{Fig:cartprod}). Write the Cartesian Product of $t$ copies of a graph, $G$, as $G^t$. A \defn{Hamming graph} is a graph of the form $K_{n_1}^{t_1}\square K_{n_2}^{t_2}\square\cdots\square K_{n_m}^{t_m}$, where $K_i$ is the complete graph on $i$ vertices. \end{definition} \begin{figure}[h] \centering \includegraphics[scale=.19]{K3P3All.png} \caption{The Cartesian Product of $K_3$ and $P_3$ with labeled vertices.} \label{Fig:cartprod} \end{figure} When discussing $G=K_n$, denote vertices with integer subscripts, $V(K_n) = \{v_i\vert i\in \mathbb{Z}, 1\leq i\leq n\}$. Denote any vertex $v\in V(K_n^t)$ with the ordered $t$-tuple $(v_{i_1},v_{i_2},\ldots ,v_{i_t})$ where $1\leq i_j\leq n$. When indexing is necessary for elements of $V(K_n^t)$, we will use superscripts, e.g. $v^i,v^j\in V(K_n^t)$. As defined earlier, a vertex $v = (v_{i_1},v_{i_2},\ldots, v_{i_{j-1}},v_{i_j},v_{i_{j+1}},\ldots v_{i_t})$ is adjacent to every $(v_{i_1},v_{i_2},\ldots, v_{i_{j-1}},v_{i_k},v_{i_{j+1}},\ldots v_{i_t})$ where $i_j\neq i_k$. And so, if $v^1 = (v_{i_1},v_{i_2},\ldots ,v_{i_t})$ and $v^2 = (v_{j_1},v_{j_2},\ldots ,v_{j_t})$, then $d(v^1,v^2)$ is exactly the number of $k$ for which $i_k\neq j_k$. This shows that $K_n^t$ has diameter $t$.\\ This paper is primarily concerned with the more general Hamming graphs. Unless otherwise specified, the following conventions will be used throughout: $G = K_{n_1}^{t_1}\square\cdots\square K_{n_m}^{t_m}$ where $n_1 < n_2 < \cdots < n_m$. We let $\overline{t}_k:=\sum_{i=1}^kt_i$, and denote vertices $v\in V(G)$ by ordered $t$-tuples $v=(v_{i_1},\ldots,v_{i_t})$ where if $j$ is such that $\overline{t}_{k-1} < j \leq \overline{t}_k$, then $v_{i_j}\in V(K_{n_k})$. That is to say, the first $t_1$ coordinates are from $V(K_{n_1})$ and the next $t_2$ are from $V(K_{n_2})$ and so on. And just as in the case above, we have that the distance between two vertices is the number of coordinates in which the vertices differ. This also means that $G$ has diameter $\overline{t}_m$, which we will simply call $t$. Lastly, we let $N:=\vert V(G)\vert = \prod_{i=1}^mn_i^{t_i}$.\\ Niedzialomski has shown, by construction, that there exist optimal labelings (see Definition \ref{consecutivelabeling}) for $K_n^t$ where $t\leq n$ (for $n\geq 3$) and there cannot exist such labelings when $t\geq 1+\frac{n(n^2-1)}{6}$. One goal of this paper is to work towards filling the gap where $n<t<1+\frac{n(n^2-1)}{6}$ depicted in Figure \ref{Fig:gap}, which is taken from Niedzialomski's paper.
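As a quick illustration of these conventions (a worked example of ours, not taken from \cite{Amanda}): let $G = K_3^2\square K_4$, so that $t=3$ and the first two coordinates of a vertex come from $V(K_3)$ while the third comes from $V(K_4)$. The vertices $u = (v_1,v_2,v_4)$ and $w = (v_2,v_2,v_4)$ differ only in the first coordinate, so $d(u,w)=1$, while $u$ and $(v_2,v_3,v_1)$ differ in all three coordinates and so realize the diameter $t=3$.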
\begin{figure}[h] \centering \includegraphics[scale=.75]{Figure_5_1_Stolen.png} \caption{Known results where $rn(G)$ is the smallest codomain, $\mathbb{Z}_n$, needed to label $G$.} \label{Fig:gap} \end{figure} Radio labeling comes from the Channel Assignment Problem of assigning frequencies to radio transmitters, where transmitters that are closer together must have a larger difference in frequency to avoid interference. This problem was introduced into graph theory by Hale in 1980 \cite{Hale}. \begin{definition} A $k$-\defn{radio labeling} of a simple connected graph $G=(V,E)$ is a function $f:V\rightarrow\mathbb{Z}^+$ subject to the constraint $$\vert f(v) - f(u)\vert\geq k+1-d(v,u)$$ where $v,u\in V$ are distinct and $d(v,u)$ is the distance between $v$ and $u$ in $G$. This inequality is the \defn{radio condition}. \end{definition} It is easy to see $k$-radio labeling as a generalization of some more familiar labelings. $1$-radio labeling is equivalent to vertex coloring since the radio condition for $k=1$ simply prohibits neighbors from having the same label. $2$-radio labeling is equivalent to $L(2,1)$-labeling, which has also been studied quite heavily and was introduced in \cite{GriggsYeh}. For a survey of $L(2,1)$-labeling, see \cite{Yeh}. \begin{definition} Of particular interest is when $k=$ diam($G$), in which case we simply call $f$ a \textbf{radio labeling}; in this case, $f$ is necessarily injective. \end{definition} Radio labeling was originally introduced in \cite{Chartrand}. This paper will only be concerned with consecutive radio labeling. \begin{definition}\label{consecutivelabeling} If a radio labeling $f$ is a bijection between $V$ and $\{1,2,\ldots, \vert V\vert\}$, we call $f$ a \defn{consecutive radio labeling}. We call any $G$ for which such a labeling exists \defn{radio graceful}. \end{definition} Ideally there would be an algorithm for quickly computing whether a general graph is radio graceful, but no efficient algorithm is currently known. Although the exact computational complexity of this problem is unknown, the computation is infeasible for general graphs of even moderate size by any known algorithm. This paper will develop a method to more efficiently compute consecutive radio labelings of Hamming graphs, making use of Niedzialomski's radio labelings induced by vertex orderings. \begin{definition}\label{orddef} Given a simple connected graph, $G=(V,E)$, an \defn{ordering of $V$} is an ordered list, $O = (v^1,v^2,\ldots, v^{\vert V\vert})$ such that $v^i\neq v^j$ for $i\neq j$. \end{definition} Given such an ordering, $O$, one can generate a radio labeling of $G$ by mapping $f(v^1)=1$, and then mapping each vertex in order so that each vertex is sent to the smallest integer possible that still satisfies the radio condition. Put more formally, \begin{definition}\label{induceddef} The \defn{radio labeling induced by $O$} is a function, $f:V\rightarrow\mathbb{Z}^+$, such that $$f(v^1)=1$$ $$f(v^i) = \min\{x\in\mathbb{Z}\mid x>f(v^{i-1})\text{ and }(1\leq j<i)\Rightarrow(\vert x-f(v^j)\vert\geq \text{diam}(G) + 1 - d(v^i,v^j))\}$$ \end{definition} \section{Orderings of Hamming Graphs} Our goal is to generate orderings for Hamming graphs that induce consecutive radio labelings, or determine when this is not possible. In this section, we will restate the problem of finding consecutive radio labelings as an equivalent problem involving orderings.
This will be accomplished via the following. \begin{prop}\label{rcmatham} If $G = K_{n_1}^{t_1}\square\cdots\square K_{n_m}^{t_m}$ and $O$ is a list containing $N$ elements from $V(G)$, then $O$ is an ordering of $V(G)$ that induces a consecutive radio labeling if and only if for all $1 < i\leq N$ and all $k<t$, $v^i$ and $v^{i-k}$ share at most $k-1$ coordinates, and $O$ contains no repetition, i.e. $v^i\neq v^j$ for $i\neq j$. \end{prop} \begin{proof} Suppose $O = (v^1,\ldots,v^N)$. It is true by Definition \ref{orddef} that $O$ is an ordering of $V(G)$ if and only if it contains no repetition.\\ It also follows quite directly from the definition that for any given $i$ and $k$ as in the premise, if $O$ induces a consecutive radio labeling, then $f(v^i) = i$ and $f(v^{i-k}) = i-k$ so that $$f(v^i) - f(v^{i-k}) = k \geq t + 1 - d(v^i,v^{i-k})\Longleftrightarrow d(v^i,v^{i-k})\geq t- (k-1)$$ which is equivalent to the statement that $v^i$ and $v^{i-k}$ share at most $k-1$ coordinates.\\ Conversely, suppose that for all $i$ and all $k < t$, $d(v^i,v^{i-k})\geq t-(k-1)$. We need to show that the labeling induced by $O$ will be $f(v^i) = i$. It is sufficient to show that such a function $f$ is a valid radio labeling, since then it must be the induced labeling from $O$ due to Definition \ref{induceddef}. It is certainly true that any two vertices listed in $O$ less than $t$ apart will satisfy the radio condition by the previous argument (which is reversible). As for the case when $k\geq t$, we have $f(v^i) - f(v^{i-k}) = k \geq t \geq t + 1 - d(v^i,v^{i-k})$, since distinct vertices satisfy $d(v^i,v^{i-k})\geq 1$. Therefore, $f(v^i) = i$ satisfies the radio condition as desired. \end{proof} \begin{corollary}\label{rcmat} If $O$ is a list containing $n^t$ elements from $V(K_n^t)$, then $O$ is an ordering of $V(K_n^t)$ that induces a consecutive radio labeling if and only if for all $1 < i\leq n^t$ and all $k<t$, $v^i$ and $v^{i-k}$ share at most $k-1$ coordinates, and $O$ contains no repetition, i.e. $v^i\neq v^j$ for $i\neq j$. \end{corollary} When $G = K_{n_1}^{t_1}\square\cdots\square K_{n_m}^{t_m}$, we will write any ordering, $O = (v^1,\ldots,v^N)$, as an $N\times t$ matrix where the $i$\textsuperscript{th} row of $O$ is $v^i = (v^i_1,v^i_2,\ldots ,v^i_t)$ where each $v^i_j = v_l\in V(K_{n_k})$ for some $l$ and the appropriate $n_k$. In particular, orderings of $K_n^t$ are $n^t\times t$ matrices containing elements from $V(K_n)$ (See Figure \ref{Fig:K32Ord}). From now on, all orderings will be these matrices.\\ Proposition \ref{rcmatham} allows us to essentially rewrite the radio condition in this new context of $N\times t$ matrices. This transforms our problem from graph labeling to trying to generate a matrix satisfying certain properties; namely, that row $i$ may be identical to row $i-k$ in at most $k-1$ places.\\ However, this transition from graph labeling to matrix generation means that we not only need to mind the radio condition but also avoid repetition, as is mentioned in Proposition \ref{rcmatham}. Since our focus will almost always be on the radio condition with no regard for repetition, the following definition is necessary. \begin{definition} A \textbf{weak ordering} of $V(K_{n_1}^{t_1}\square\cdots\square K_{n_m}^{t_m})$ is an $N\times t$ matrix where the elements of column $j$ are chosen from $V(K_{n_k})$ when $\overline{t}_{k-1}<j\leq\overline{t}_k$. The set of all orderings is the subset of the set of all weak orderings whose rows have no repetition.
\end{definition} \begin{figure}[h] \centering \includegraphics[scale=.15]{2K3All.png} \includegraphics{K32Mat.png}\includegraphics[scale=.15]{K32Radio.png} \caption{The matrix (bottom) is an ordering of the vertices of $K_3^2$ (top) that induces a consecutive radio labeling.} \label{Fig:K32Ord} \end{figure} Lastly, the following definition and lemma will be helpful anytime we wish to relabel our $v_i$ in orderings. \begin{definition}\label{permutecoldef} Define $O_{u,\sigma}$ as follows. Given $O$, an ordering of $V(G)$, let $u = (v_j^i)_{i=1}^{N}$ be a column of $O$, and let $\sigma\in S_{n_k}$ be a permutation where $k$ is such that $u$ has elements from $V(K_{n_k})$. Then replacing $u$ in $O$ with the new column $(v_{\sigma(l_{i,j})})_{i=1}^{N}$ (where $v_j^i = v_{l_{i,j}}$) yields a new ordering, call it $O_{u,\sigma}$. \end{definition} \begin{lemma}\label{permutecol} If $O$ induces a consecutive radio labeling (of $V(K_{n_1}^{t_1}\square\cdots\square K_{n_m}^{t_m})$), $u$ is a column of $O$, and $\sigma\in S_{n_k}$ for the appropriate $k$, then $O_{u,\sigma}$ also induces a consecutive radio labeling. \end{lemma} \begin{proof} This is a direct result of Proposition \ref{rcmatham}, since permuting the elements of a column as described does not change the number of coordinates shared between any pair of rows. \end{proof} One way to view the Cartesian product in this context is to see each coordinate of a vertex, and hence column of an ordering $O$, as corresponding to a copy of $K_{n_l}$ for some $l$; each vertex (row) in the product is then a choice of one vertex from each of these complete graphs, of which there are $t$ in total. In light of this, the process of permuting a column as in Definition \ref{permutecoldef} is exactly the same as relabeling the vertices of the copy of $K_{n_k}$ to which this column corresponds. In general, this fact that each column is somewhat independent (as far as Proposition \ref{rcmatham} is concerned) is what allows us to generalize many results from graphs of the form $K_n^t$ to more general Hamming graphs. \section{Bounds on Labeling} In Niedzialomski's paper \cite{Amanda}, it is shown that for any graph $G$, there is some integer $t$ for which $G^t$ is not radio graceful. Specifically, there is a bound for $K_n^t$ stated here as Corollary \ref{upperbound}. I will present an analogous bound for the more general case of Hamming graphs. In the following proofs of bounds, we will consider orderings, and derive a contradiction if $t$ is too much larger than $n$.\\ I will demonstrate the idea behind the proof method of the bound with the example of $K_3^5$, which I will show is not radio graceful. (It may be helpful to follow along in Figure \ref{Fig:K35Ord}.) \begin{example} Suppose for the sake of contradiction that there exists an ordering, $O$, of $V(K_3^5)$ which induces a consecutive radio labeling. Consider the segment of $O$ from $v^i$ to $v^{i+3}$ for any appropriate $i$ (i.e. $1\leq i\leq 3^5-3$). Due to Lemma \ref{permutecol}, we can permute each column so that $v^i = (v_1,v_1,v_1,v_1,v_1)$ and $v^{i+1} = (v_2,v_2,v_2,v_2,v_2)$; we can do this since we know that $v^i$ and $v^{i+1}$ share no coordinates. Next, $v^{i+2}$ can share at most one coordinate with $v^i$ and must be different from both $v^i$ and $v^{i+1}$ in all other coordinates. Thus, using Lemma \ref{permutecol} again, we may now permute the last four columns of $O$ so that the last four coordinates of $v^{i+2}$ become $v_3$ (different from both $v_1$ and $v_2$).
Lastly, consider $v^{i+3}$; the first coordinate is no longer under our control to permute, as it was not fixed in $v^{i+2}$, meaning that we cannot ensure that a permutation involving this column would leave our current setup unchanged. However, we do know that $v^{i+3}$ may share at most two places with $v^i$, and one more with $v^{i+1}$. This still leaves one coordinate which must then be different from all the previous vertices in this segment, and this is impossible since our only choices are $v_1,v_2,$ and $v_3$. Therefore, there does not exist an ordering, $O$, that induces a consecutive radio labeling on $K_3^5$. Additionally, from this construction we can conclude that in any valid ordering for $K_3^4$, $v^i$ must share exactly one coordinate with $v^{i+2}$, exactly two with $v^{i+3}$, and $v^{i+1}$ must also share a coordinate with $v^{i+3}$, otherwise we have the same problem. \begin{figure}[h] \begin{center} $\begin{bmatrix} v^i\\v^{i+1}\\v^{i+2}\\v^{i+3} \end{bmatrix} = \left[\begin{array}{ccccccccccc} v_1&v_1&v_1&v_1&v_1\\ v_2&v_2&v_2&v_2&v_2\\ v_1?&v_3&v_3&v_3&v_3\\ ?&v_2?&v_1?&v_1?&\textbf{X} \end{array}\right]$ \end{center} \caption{A segment of an ordering of $K_3^5$ showing that there is no possible fourth vertex.} \label{Fig:K35Ord} \end{figure} \end{example} See Figure \ref{Fig:K411Ord} for a demonstration of this contradiction with $K_4^{11}$ (because $1+\frac{4(16-1)}{6}=11$), although this figure can also be viewed as a demonstration for $K_3^4\square K_4^7$. \begin{figure}[h] \begin{center} $\begin{bmatrix} v^i\\v^{i+1}\\v^{i+2}\\v^{i+3}\\v^{i+4} \end{bmatrix} = \left[\begin{array}{ccccccccccc} v_1&v_1&v_1&v_1&v_1&v_1&v_1&v_1&v_1&v_1&v_1\\ v_2&v_2&v_2&v_2&v_2&v_2&v_2&v_2&v_2&v_2&v_2\\ v_1?&v_3&v_3&v_3&v_3&v_3&v_3&v_3&v_3&v_3&v_3\\ ?&v_2?&v_1?&v_1?&v_4&v_4&v_4&v_4&v_4&v_4&v_4\\ ?&?&?&?&v_3?&v_2?&v_2?&v_1?&v_1?&v_1?&\textbf{X} \end{array}\right]$ \end{center} \caption{A segment of an ordering of $K_4^{11}$ showing that there is no possible fifth vertex.} \label{Fig:K411Ord} \end{figure} The big idea is that, because of the radio condition (viewed as in Proposition \ref{rcmatham}), a vertex is only allowed so many columns in which it can agree with previous vertices, so that many coordinates must be different from those above them, and eventually this is not possible because we run out of options (i.e. elements of $V(K_n)$).\\ \begin{definition}\label{alphadef} Given $O$, an ordering of $V(G)$ that induces a consecutive radio labeling, define $\alpha_k^i$ to be the number of columns, $j$, of $O$ for which the collection $\{v_j^i,v_j^{i+1},\ldots,v_j^{i+k}\}$ is made up of pairwise distinct elements. For example, for any $i$, $\alpha_0^i = \alpha_1^i = t$ and $\alpha_2^i$ is either $t$ or $t-1$. \end{definition} \begin{prop}\label{alphastep} For any non-negative integer $k$ and any possible $i$, $$\alpha_{k+1}^i\geq \alpha_k^i - \sum_{b=1}^kb.$$ \end{prop} \begin{proof} This is a direct consequence of Proposition \ref{rcmatham} since $v^{i+k+1}$ may share at most $k+(k-1)+\cdots+1+0$ total coordinates with $v^i,v^{i+1},\ldots,v^{i+k}$ so that there are at most $\sum_{b=1}^kb$ columns not counted in $\alpha_{k+1}^i$ that were counted in $\alpha_k^i$.
\end{proof} \begin{prop}\label{alphabound} For any $1\leq k\leq m$ and any possible $i$, $$\alpha^i_{n_k}\leq t - \overline{t}_k$$ \end{prop} \begin{proof} For any such $k$ and $i$, there are $\overline{t}_k$ columns of $O$ whose entries are chosen from some $V(K_{n_l})$ with $n_l\leq n_k$, so that none of these columns can count towards $\alpha_{n_k}^i$, since it cannot be that $\{v^i_j,\ldots,v^{i+n_k}_j\}$ are pairwise distinct if these $n_k+1$ elements are chosen from a set of $n_k$ or fewer elements. There are $t$ total columns so that $t-\overline{t}_k$ is an upper bound on $\alpha_{n_k}^i$. \end{proof} With all of this under our belt, we are now ready to prove the bound. Note that the following corollary is found in (\cite{Amanda}, Corollary 12), and this is simply a different and more direct proof (in that if $G = K_n^t$, i.e. $m=1$, then the following is a direct proof) using the construction above that is defined for the more general Hamming graphs. \begin{theorem}\label{upperboundgeneral} Given $G = K_{n_1}^{t_1}\square\cdots\square K_{n_m}^{t_m}$, if $\overline{t}_k\geq 1 + \frac{n_k(n_k^2-1)}{6}$ for some $k$, then $G$ is not radio graceful. \end{theorem} \begin{proof} Suppose, for the sake of contradiction, that $G$ is radio graceful where $\overline{t}_k\geq 1 + \frac{n_k(n_k^2-1)}{6}$. Then there exists an ordering $O$ of $V(G)$ that induces a consecutive radio labeling. Let $i$ be any fixed positive integer less than $N-n_k$. By Proposition \ref{alphastep}, the following string of inequalities holds: $$\alpha_{n_k}^i\geq \alpha_{n_k-1}^i-\sum_{b=1}^{n_k-1}b\geq \alpha_{n_k-2}^i - \sum_{b=1}^{n_k-2}b - \sum_{b=1}^{n_k-1}b\geq\cdots\geq\alpha_1^i - \sum_{a=1}^{n_k-1}\sum_{b=1}^ab = t - \frac{n_k(n_k^2-1)}{6} > t-\overline{t}_k$$ But $\alpha_{n_k}^i > t-\overline{t}_k$ contradicts Proposition \ref{alphabound}. Therefore $G$ is not radio graceful when $\overline{t}_k\geq 1 + \frac{n_k(n_k^2-1)}{6}$. \end{proof} \begin{corollary}\label{upperbound} If $t\geq 1 + \frac{n(n^2-1)}{6}$, then $K_n^t$ is not radio graceful. \end{corollary} Furthermore, by a similar method using Definition \ref{alphadef}, we can prove something constructive about graphs that are right underneath our bound. \begin{theorem}\label{BoundaryForcedGeneral} Let $G = K_{n_1}^{t_1}\square\cdots\square K_{n_m}^{t_m}$ with $\overline{t}_k = \frac{n_k(n_k^2-1)}{6}$ for some $k$. If $O$ is any ordering of $V(G)$ that induces a consecutive radio labeling, then for every $i$ and every $j\leq n_k$, $v^i$ and $v^{i+j}$ must share exactly $j-1$ coordinates. \end{theorem} \begin{proof} Suppose, for the sake of contradiction, that for some $i$ and some $j\leq n_k$, $v^i$ and $v^{i+j}$ do not share exactly $j-1$ coordinates. Then it must be the case that they share fewer than $j-1$ coordinates due to Proposition \ref{rcmatham}. This implies that $$\alpha_j^i>\alpha_{j-1}^i - \sum_{b=1}^{j-1}b$$ by Proposition \ref{alphastep} where we cannot have equality since $v^i$ and $v^{i+j}$ share fewer than $j-1$ coordinates. Thus, we can again use Proposition \ref{alphastep} to get the following string of inequalities, $$\alpha_{n_k}^i\geq\alpha_{n_k-1}^i - \sum_{b=1}^{n_k-1}b\geq\cdots\geq\alpha_j^i-\sum_{a=j}^{n_k-1}\sum_{b=1}^ab > \alpha_{j-1}^i-\sum_{a=j-1}^{n_k-1}\sum_{b=1}^ab\geq\cdots\geq \alpha_1^i - \sum_{a=1}^{n_k-1}\sum_{b=1}^ab = t - \overline{t}_k$$ But $\alpha_{n_k}^i > t-\overline{t}_k$ contradicts Proposition \ref{alphabound}.
Therefore, if $O$ induces a consecutive radio labeling, then $v^i$ and $v^{i+j}$ must share exactly $j-1$ coordinates. \end{proof} \begin{corollary}\label{BoundaryForced} In any ordering, $O$, of the vertices of $K_n^{\frac{n(n^2-1)}{6}}$ that induces a consecutive radio labeling, $v^i$ and $v^{i+j}$ for every $i$ and every $j\leq n$ must share exactly $j-1$ coordinates. \end{corollary} Lastly, another result from Niedzialomski's paper that I will not be showing here is that if $t\leq n$, then $G = K_n^t$ is radio graceful. She cleverly constructs a consecutive radio labeling using permutations of smaller segments of an ordering for $G$, but this construction does not give a consecutive radio labeling for $K_n^{n+1}$, so we must find a different approach to generating labelings in order to fill the gap in Figure \ref{Fig:gap}. \section{Generating Orderings} Orderings have allowed us to restate the problem of radio labeling as one of constructing matrices that satisfy certain conditions. This abstraction not only makes the problem easier to reason about, especially with Hamming graphs, but also leads us to certain results rather directly. In this section, I will introduce a new family of matrices that fully encapsulate radio labeling of Hamming graphs in the hopes that this new paradigm will be helpful in finding new results and closing the gap in Figure \ref{Fig:gap}. These new matrices can be viewed as containing `instructions' for generating an ordering; however, we will often be able to restate the radio condition in the context of these generating matrices so that we need not consider orderings anymore, just as orderings allowed us to essentially disregard the graph concept.\\ Proposition \ref{rcmatham} states the radio condition for orderings in a way that allows us to view columns as somewhat independent, as has been mentioned earlier. In particular, if we are attempting to generate an ordering and we come to any entry, $v^i_j$, we need only inspect column $j$ when asking how this entry affects our radio condition considerations. As such, we will restrict ourselves to generating columns of orderings and find a mechanism for detecting repetition within a column, i.e. when two rows share a coordinate in this column. \begin{definition} Let $\Delta_n\subset S_n$ be a subset containing $n-1$ elements, $\Delta_n = \{f_2,\ldots,f_n\}$, such that $f_k(k)=1$ for each $k$. Any $\Delta_n$ satisfying these conditions is called an \textbf{instruction set}. Call the collection $\{\Delta_n^i:S_n^{i-1}\rightarrow\mathcal{P}(S_n)\mid 2\leq i\leq N\}$, denoted simply by $\Delta_n^i$, an \textbf{instruction set generator} if every element in the image of each $\Delta_n^i$ is an instruction set. An instruction set generator can be viewed as a function that takes as input the current state of the column and generates an instruction set containing the possible next instructions. We will use the convention that $\sigma_1\sigma_2 = \sigma_2\circ\sigma_1$ when denoting composition of permutations. \end{definition} \begin{definition}\label{AnBndef} Given integers $N,n\geq 3$, and an instruction set generator, $\Delta_n^i$, define \begin{align*} A_n &= \left\{\vec{u} = \begin{pmatrix} v_{i_1}=v_1\\v_{i_2}=v_2\\v_{i_3}\\\vdots\\v_{i_N} \end{pmatrix}\mid v_{i_j}\in V(K_n), v_{i_j}\neq v_{i_{j+1}}\right\}\\ B_n &= \left\{\vec{u'} = \begin{pmatrix} \sigma_1 = id\\\sigma_2 = f_2\\\sigma_3\\\vdots\\\sigma_N \end{pmatrix}\mid\sigma_i\in\Delta_n^i(\sigma_1,\ldots,\sigma_{i-1})\text{ for }i\geq 2\right\}.
\end{align*} Elements of $A_n$ can be thought of as columns of orderings which correspond to $K_n$, while elements of $B_n$ will be the columns of our new matrix that can be viewed as generating a corresponding element in $A_n$. \end{definition} We will construct a 1-1 correspondence between $A_n$ and $B_n$ using the following. \begin{definition}\label{actiondef} Let $D_n$ be the subset of $(V(K_n))^n$ whose coordinates are all distinct, i.e. $(v_{s_1},\ldots,v_{s_n})\in D_n$ if and only if $v_{s_i}\in V(K_n)$ for all $1\leq i\leq n$ and $v_{s_i}\neq v_{s_j}$ for all $1\leq i < j\leq n$. Define an action of $S_n$ on $D_n$ by $\sigma\cdot(v_{s_1},\ldots,v_{s_n}) = (v_{s_{\sigma^{-1}(1)}},\ldots,v_{s_{\sigma^{-1}(n)}})$ where $\sigma\in S_n$. We have $S_n$ act on $D_n$ in such a way that we can talk about $\sigma = (135)$, for example, as being the instruction that sends the first coordinate of an element of $D_n$ to the third place, the third to the fifth and the fifth back to the first. \end{definition} \begin{definition}\label{phidef} We now construct our bijection between $A_n$ and $B_n$. Define $(D_n^N)'$ to be the subset of $D_n^N$ whose elements $\begin{pmatrix}(v_1^1,\ldots,v_n^1)\\\vdots\\(v_1^N,\ldots,v_n^N)\end{pmatrix}$ satisfy the following constraints: that $v_1^j\neq v_1^{j+1}$ for all $j$, $v_i^1 = v_i$ for $1\leq i\leq n$ and $v_1^2 = v_2$.\\ Let $\varphi:B_n\rightarrow (D_n^N)'$ be defined as follows: if $\vec{u'} = \begin{pmatrix} \sigma_1\\\vdots\\\sigma_N \end{pmatrix}\in B_n$, then let $$\varphi(\vec{u'}) = \begin{pmatrix} o_1 = (v_1,\ldots,v_n)\\o_2 = \sigma_2\cdot o_1\\\vdots\\o_N = \sigma_N\cdot o_{N-1} \end{pmatrix}.$$ Note that $\varphi(\vec{u'})\in(D_n^N)'$ because $\sigma_2=f_2$ so that $o_2$ has $v_2$ as its first coordinate, and since $\sigma^{-1}(1)\neq 1$ for all $\sigma\in\Delta_n$.\\ Next, let $\pi_1:(D_n^N)'\rightarrow A_n$ be the projection to the first coordinate, i.e. $$\pi_1\begin{pmatrix}(v_1^1,\ldots,v_n^1)\\\vdots\\(v_1^N,\ldots,v_n^N)\end{pmatrix} = \begin{pmatrix}v_1^1\\\vdots\\v_1^N\end{pmatrix}.$$ Finally, define $\phi_n:B_n\rightarrow A_n$ by composition $$\phi_n := \pi_1\circ\varphi.$$ \end{definition} \begin{theorem}\label{corr} If $n,N\geq 3$ are any integers, and $\Delta_n^i$ is any instruction set generator, then $\phi_n$, as in Definition \ref{phidef}, is a 1-1 correspondence between $A_n$ and $B_n$. \end{theorem} \begin{proof} Since $A_n$ and $B_n$ both have $(n-1)^{N-2}$ elements, we need only show that $\phi_n$ is surjective. Given $\vec{u} = \begin{pmatrix} v_{i_1}\\\vdots\\v_{i_N} \end{pmatrix}\in A_n$, let $\vec{u''} = \begin{pmatrix} o_1\\\vdots\\o_N \end{pmatrix}$ where $o_1=(v_1,\ldots,v_n)$, $\sigma_1=id\in S_n$, and if $o_j = (v_{s_1},\ldots,v_{s_n})$ and $i_{j+1} = s_k$, then $\sigma_{j+1} = f_k\in\Delta_n^{j+1}(\sigma_1,\ldots,\sigma_j)$ and $o_{j+1} = \sigma_{j+1}\cdot o_j$. By construction, we have that $\pi_1(\vec{u''}) = \vec{u}$ and that $\vec{u''}$ is in $Im(\varphi)$, specifically, $\vec{u'} = \begin{pmatrix} \sigma_1\\\vdots\\\sigma_N \end{pmatrix}\in B_n$ with $\phi_n(\vec{u'}) = \vec{u}$, making $\phi_n$ surjective and therefore bijective. \end{proof} With this theorem, we are ready to restate the problem of radio labeling as a problem of finding matrices that generate orderings.
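Before doing so, we note that the action in Definition \ref{actiondef} and the map $\phi_n$ are easy to compute mechanically. The following short sketch (ours, purely illustrative; the function names are not from the literature) represents a permutation $\sigma$ as a dictionary mapping $j\mapsto\sigma(j)$ and the vertices $v_1,\ldots,v_n$ simply as $1,\ldots,n$; running it with $\Delta_3 = \{f_2=(12), f_3=(123)\}$ reproduces the first column of the worked $K_3^2$ example later in this section.
\begin{verbatim}
def act(sigma, o):
    # sigma . (v_{s_1}, ..., v_{s_n}): the entry in position j
    # moves to position sigma(j), matching the action above.
    new = [None] * len(o)
    for j, x in enumerate(o, start=1):
        new[sigma[j] - 1] = x
    return tuple(new)

def phi(instructions, n):
    # instructions = (sigma_1, ..., sigma_N) with sigma_1 = id;
    # returns the column of first coordinates of o_1, ..., o_N.
    o, column = tuple(range(1, n + 1)), []
    for sigma in instructions:
        o = act(sigma, o)          # sigma_1 = id leaves o_1 = (1, ..., n)
        column.append(o[0])
    return column

ident = {1: 1, 2: 2, 3: 3}
f2 = {1: 2, 2: 1, 3: 3}   # the transposition (12)
f3 = {1: 2, 2: 3, 3: 1}   # the 3-cycle (123)
print(phi([ident, f2] + [f3] * 7, 3))   # [1, 2, 3, 1, 2, 3, 1, 2, 3]
\end{verbatim}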
\begin{definition} Given $G = K_{n_1}\square\cdots\square K_{n_t}$, a Hamming graph (the $n_i$ are not necessarily distinct), a \textbf{weak order-generator}, $O'$, for $V(G)$ (with respect to choices of $\Delta_{n_k}^i$ for each $n_k$) is an $N\times t$ matrix where every entry of the first row is $id$, every entry of the second row is $f_2$, and for all but the first row, elements of column $j$ are chosen using $\Delta_{n_j}^i$. We denote the $j$th element of the $i$th row by $\sigma_i^j$. \end{definition} \begin{definition} Given $G = K_{n_1}\square\cdots\square K_{n_t}$ and choices of $\Delta_{n_k}^i$ for each $n_k$, define $\Phi_G$ to be a function from the set of weak order-generators for $V(G)$ to the set of all weak orderings of $V(G)$ whose first two rows are $(v_1,\ldots,v_1)$ and $(v_2,\ldots,v_2)$. Given $O'=[c_1\ \cdots\ c_t]$, $\Phi_G(O') := [\phi_{n_1}(c_1)\ \cdots\ \phi_{n_t}(c_t)]$, i.e. $\Phi_G$ applies the corresponding $\phi_n$ (using the corresponding $\Delta_{n_j}^i$) to each column. Since each of the $\phi_n$ is a bijection, so is $\Phi_G$. \end{definition} Notice that given any ordering that induces a consecutive radio labeling of $G$, we can always use Lemma \ref{permutecol} to get a new ordering whose first two rows are $(v_1,\ldots,v_1)$ and $(v_2,\ldots,v_2)$, as required above, that also induces a consecutive radio labeling. \begin{definition} An \textbf{order-generator} of $V(G)$ (with respect to choices of $\Delta_{n_k}^i$ for each $n_k$) is a weak order-generator, $O'$, such that $\Phi_G(O')$ is an ordering, i.e. $\Phi_G(O')$ has no row repetition. Note that if $\Phi_G$ is restricted to only order-generators, it becomes a bijection between all order-generators and all orderings. \end{definition} The following is an example where an ordering of $K_3^2$ that induces a consecutive radio labeling is shown next to its generator and the intermediate construction, where $$\Delta_3 = \{f_2 = (12),f_3 = (123)\}.$$ is used for both columns. That is, the constant function $\Delta_3^i = \Delta_3$. $$O=\begin{bmatrix} v_1&v_1\\ v_2&v_2\\ v_3&v_3\\ v_1&v_2\\ v_2&v_3\\ v_3&v_1\\ v_1&v_3\\ v_2&v_1\\ v_3&v_2 \end{bmatrix}\leftrightarrow \varphi(O') = \begin{pmatrix} (v_1,v_2,v_3)&(v_1,v_2,v_3)\\ (v_2,v_1,v_3)&(v_2,v_1,v_3)\\ (v_3,v_2,v_1)&(v_3,v_2,v_1)\\ (v_1,v_3,v_2)&(v_2,v_3,v_1)\\ (v_2,v_1,v_3)&(v_3,v_2,v_1)\\ (v_3,v_2,v_1)&(v_1,v_3,v_2)\\ (v_1,v_3,v_2)&(v_3,v_1,v_2)\\ (v_2,v_1,v_3)&(v_1,v_3,v_2)\\ (v_3,v_2,v_1)&(v_2,v_1,v_3) \end{pmatrix} \leftrightarrow O'=\begin{bmatrix} id&id\\ f_2&f_2\\ f_3&f_3\\ f_3&f_2\\ f_3&f_2\\ f_3&f_3\\ f_3&f_2\\ f_3&f_2\\ f_3&f_3 \end{bmatrix}$$ The reader is encouraged to write out their own example, even if it be just for a single column, and use this as a reference later in this section and the next.\\ Now that we have the construction of order-generators, we wish to find a way of detecting repetition in the orderings that are generated. \begin{lemma}\label{replem} Let $A_n$ and $B_n$ be as usual (for a fixed $\Delta_n^i$). If $\phi_n(\begin{pmatrix}\sigma_1\\\vdots\\\sigma_N\end{pmatrix}) = \begin{pmatrix}v_{i_1}\\\vdots\\v_{i_{N}}\end{pmatrix}$, then $$v_{i_j} = v_{i_{j+k}} \Leftrightarrow \sigma_{j+1}\sigma_{j+2}\cdots\sigma_{j+k}(1) = 1.$$ \end{lemma} \begin{proof} Let $\vec{u'} = \begin{pmatrix}\sigma_1\\\vdots\\\sigma_N\end{pmatrix}$ and $\vec{u} = \begin{pmatrix}v_{i_1}\\\vdots\\v_{i_{N}}\end{pmatrix}$ above. Define $\vec{u''} = \varphi(\vec{u'}) = \begin{pmatrix}o_1\\\vdots\\o_N\end{pmatrix}$.
By Definition \ref{phidef}, we have that $$o_{j+k} = \sigma_{j+k}\cdot o_{j+k-1} = (\sigma_{j+k-1}\sigma_{j+k})\cdot o_{j+k-2} = \cdots = (\sigma_{j+1}\cdots\sigma_{j+k})\cdot o_j,$$ and that $\pi_1(\vec{u''}) = \vec{u}$ meaning that the first coordinate of $o_j$ is $v_{i_j}$ and the first coordinate of $o_{j+k}$ is $v_{i_{j+k}}$, so that we can conclude $$v_{i_j} = v_{i_{j+k}}\Leftrightarrow \pi^1(o_j) = \pi^1((\sigma_{j+1}\cdots\sigma_{j+k})\cdot o_j)\Leftrightarrow \sigma_{j+1}\cdots\sigma_{j+k}(1) = 1$$ where $\pi^1(v_{s_1},\ldots,v_{s_n}) := v_{s_1}$. \end{proof} In order to rewrite the conditions for a consecutive radio labeling more explicitly for $O'$, we wish to keep track of when a run of instructions in a column of $O'$ maps $1$ to itself, as this corresponds with repetition in that column of $O$ by the previous lemma. \begin{definition} Given an instruction set generator $\Delta_n^i$, define $\Lambda_s^i$ to be the set of all runs of $s$ instructions, starting at position $i$, that map $1$ to itself. That is, $$\Lambda_s^i(\rho_1,\ldots,\rho_{i-1}) := \{\sigma_1\sigma_2\cdots\sigma_s\mid\sigma_k\in\Delta_n^{i+k-1}(\rho_1,\ldots,\rho_{i-1},\sigma_1,\ldots,\sigma_{k-1}),\sigma_1\cdots\sigma_s(1)=1\}.$$ In the presence of multiple $\Delta_{n_k}^i$, we will denote the $\Lambda_s$ corresponding to $\Delta_{n_k}^i$ by $\Lambda_{s,k}^i$. \end{definition} Hence, if one can characterize $\Lambda_s^i$ for a given $\Delta_n^i$, then one can restate the conditions for a consecutive radio labeling easily by combining Lemma \ref{replem} with Proposition \ref{rcmatham}: \begin{theorem}\label{LambdasGeneral} $O'$ generates an ordering that induces a consecutive radio labeling of $G = K_{n_1}\square\cdots\square K_{n_t}$ if and only if there is no repetition in $O:=\Phi_G(O')$, and for all $i<N$ and for every $s<t$, at most $s-1$ values of $j$ result in runs, $\sigma_{i-s+1}^j\cdots \sigma_i^j$, contained in $\Lambda_{s,j}^{i-s+1}(\sigma_1^j,\ldots,\sigma_{i-s}^j)$. \end{theorem} \begin{corollary}\label{Lambdas} $O'$ generates an ordering that induces a consecutive radio labeling of $K_n^t$ if and only if there is no repetition in $O:=\Phi_G(O')$, and for all $i<n^t$ and for every $s<t$, at most $s-1$ values of $j$ result in runs, $\sigma_{i-s+1}^j\cdots \sigma_i^j$, contained in $\Lambda_s^{i-s+1}(\sigma_1^j,\ldots,\sigma_{i-s}^j)$. \end{corollary} \section{Examples and an Ordering of $K_3^4$} We will now consider some simple examples of choices for $\Delta_n^i$ with characterizations of their corresponding $\Lambda_s^i$, and we will conclude with a valid ordering of $K_3^4$, which was previously not known to be radio graceful.\\ For the remainder of this section, if there is ever a place where any element of $\Delta_n$ can be used in a run, then we will simply use $f$ to denote this. Also, for any $k$, let $\overline{f_k}$ denote any instruction other than $f_k$, i.e. the complement of $\{f_k\}$ in $\Delta_n$ (the relevant instruction set), and let $\overline{f_k^s}$ denote a run of $s$ elements in the complement of $\{f_k\}$ in $\Delta_n$. Lastly, let $f_{<k}$ denote any element in $\{f_2,\ldots,f_{k-1}\}$ and $f_{>k}$ denote any element in $\{f_{k+1},\ldots,f_n\}$.
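Before turning to the examples, we note that for small $n$ and $s$ these sets can be checked by brute force. The sketch below (ours, purely illustrative) enumerates, for a constant instruction set, all runs of length $s$ whose composite fixes $1$, using the left-to-right composition convention $\sigma_1\sigma_2 = \sigma_2\circ\sigma_1$ fixed earlier; for $\Delta_3 = \{f_2 = (12), f_3 = (123)\}$ it recovers $\Lambda_2 = \{f_2^2, f_3f_2\}$ and $\Lambda_3 = \{f_2f_3^2, f_3^3\}$, the sets used in the discussion of $K_3^4$ below.
\begin{verbatim}
from itertools import product

def image_of_one(run):
    # image of 1 under sigma_1 sigma_2 ... sigma_s, read left to right
    x = 1
    for sigma in run:
        x = sigma[x]
    return x

def lambda_s(delta, s):
    # Lambda_s for a constant instruction set generator; delta is a
    # list of permutations, each a dict mapping j to sigma(j).
    return [run for run in product(delta, repeat=s)
            if image_of_one(run) == 1]

f2 = {1: 2, 2: 1, 3: 3}   # (12)
f3 = {1: 2, 2: 3, 3: 1}   # (123)
print(len(lambda_s([f2, f3], 2)))   # 2 runs: f2 f2 and f3 f2
print(len(lambda_s([f2, f3], 3)))   # 2 runs: f2 f3 f3 and f3 f3 f3
\end{verbatim}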
\begin{example} The simplest instruction set generator is the constant function $$\Delta_n^i = \Delta_n = \{f_k = (1k)\mid 2\leq k\leq n\}.$$ In this case, we can determine $\Lambda_s^i$ recursively as the constant function $$\Lambda_s^i = \{xf_k\overline{f_k^l}f_k\mid 2\leq k\leq n,l\geq 0,x\in\Lambda_{s-l-2}^i\}.$$ \end{example} \begin{example}\label{LRU} Another constant function example is $$\Delta_n^i = \Delta_n = \{f_k = (12\cdots k)\mid 2\leq k\leq n\}$$ which I call the Least Recently Used (LRU) instruction set because each element of $\varphi(O')$ orders $V(K_n)$ by least recently used first (where they are ``used'' in that column of $\Phi_G(O')$). In this case $\Lambda_s^i$ is a bit more complicated, but can once again be characterized recursively as the constant function $$\Lambda_s^i = \{xff_{>2}f_{<3}^{n_3}f_{>3}f_{<4}^{n_4}f_{>4}\cdots f_{<k}^{n_k}f_k\mid 2\leq k\leq n, x\in\Lambda_{s-(k-1)-\sum_{i=3}^kn_i}^i\}.$$ \end{example} \begin{example}\label{LTU} Yet another constant function example is $$\Delta_n^i = \Delta_n = \{f_2 = (12),f_k = (12k)\mid 3\leq k\leq n\}.$$ This instruction set guarantees that the first two coordinates of each element of $\varphi(O')$ are the most recent two coordinates (which are guaranteed to be different by the constraint from the radio condition on orderings, as seen in $A_n$), and is the simplest instruction set that does so.\\ As usual, we can characterize $\Lambda_s^i$ recursively as the constant function $$\Lambda_s^i = \{xff_2\vert x\in\Lambda_{s-2}^i\}\cup\{xff_k\overline{f_k^{s'}}f_k\vert 3\leq k\leq n,x\in\Lambda_{s-s'-3}^i\}.$$ \end{example} \subsection{Order-Generators as a Generalization of Orderings} This last example will demonstrate that the framework of instruction set generators and their associated order-generators is more general than using orderings. Consider the instruction set generator that only considers the most recent element, $$\Delta_n^i(\sigma_1,\ldots,\sigma_{i-1}) = \Delta_n^i(\sigma_{i-1}) = \{f_{k'} = (1k'),f_k = (1kk')\mid 2\leq k\leq n, k\neq k'\}$$ where $k'$ is such that $\sigma_{i-1} = f_{k'}\in\Delta_n^{i-1}(\sigma_{i-2})$. If we use the naming convention of renaming $id = f_1$, and $f_{k'} = f_1$, doing this every other time if there are consecutive instructions with the same subscript, then we get that $\phi_n^{-1}(\begin{pmatrix} v_{i_1}\\\vdots\\v_{i_N} \end{pmatrix}) = \begin{pmatrix} f_{i_1}\\\vdots\\f_{i_N} \end{pmatrix}$. In particular, $$\Lambda_s^i(\sigma_1,\ldots,\sigma_{i-1}) = \Lambda_s^i(\sigma_{i-1}) = \{f^{s-1}f_k\mid k\text{ is such that }\sigma_{i-1} = f_k\}.$$ Therefore, if we look at order-generators as just another matrix with integer subscripts that must satisfy some conditions, then this example shows that we can recover our original matrices (orderings) and their constraints in this new framework. \subsection{Labeling $K_3^4$} Fix $\Delta_3^i$ for all columns to be the constant function $\Delta_3^i=\Delta_3 = \{f_2 = (12), f_3 = (123)\}$ (this can be thought of as using $\Delta_n$ from Example \ref{LRU} or from Example \ref{LTU}). As we are only concerned with $\Lambda_s$ for $s<4$ here, we need only consider $\Lambda_2 = \{f_2^2,f_3f_2\}$ and $\Lambda_3 = \{f_2f_3^2,f_3^3\}$. Thus, we can have at most one $f_2$ instruction in every row (after the second), and at most two $f_3$ instructions in the same column as an $f_3$ in the row above.
This is because any $f_2$ will necessarily cause either an $f_2^2$ or $f_3f_2$, and any two consecutive $f_3$ instructions will necessarily cause either an $f_2f_3^2$ or an $f_3^3$. Since we cannot have $3$ columns with consecutive $f_3$ instructions, we must have at least (which is equivalent to having exactly) one $f_2$ instruction in every row, and that $f_2$ must be in a different coordinate than the row above it. This is because having four $f_3$ in any row would necessarily create a problem in both the next and previous rows, and we cannot place two consecutive $f_2$ in the same column as this would also result in three runs in $\Lambda_3$. (Note that the result of Corollary \ref{BoundaryForced} is seen to come out here).\\ In this manner, as there are now only four states for every row of $O'$ (corresponding to the position of $f_2$), the problem of generating a consecutive radio labeling for $K_3^4$ has been reduced to choosing a coordinate in which to place $f_2$ in each row, with the restriction that the same coordinate cannot be chosen twice in a row, and that there can be no repetition.\\ It was previously unknown whether $K_3^4$ is radio graceful, but using the reduction presented, a backtrack searching algorithm (that was used to avoid repetition) found many orderings that induce consecutive radio labelings. Here is one of them: $$O=\begin{bmatrix} v_1&v_1&v_1&v_1\\ v_2&v_2&v_2&v_2\\ v_1&v_3&v_3&v_3\\ v_3&v_1&v_1&v_2\\ v_1&v_2&v_2&v_1\\ v_2&v_1&v_3&v_3\\ v_1&v_3&v_1&v_2\\ v_3&v_2&v_2&v_3\\ v_2&v_1&v_1&v_1\\ v_1&v_2&v_3&v_2\\ v_2&v_3&v_2&v_3\\ v_3&v_2&v_1&v_1\\ v_1&v_1&v_3&v_3\\ v_3&v_3&v_2&v_2\\ v_2&v_2&v_1&v_3\\ v_3&v_1&v_3&v_1\\ v_1&v_3&v_2&v_3 \end{bmatrix} \begin{bmatrix} v_3&v_2&v_1&v_2\\ v_2&v_3&v_3&v_1\\ v_1&v_1&v_2&v_2\\ v_3&v_2&v_3&v_3\\ v_1&v_3&v_1&v_1\\ v_2&v_1&v_3&v_2\\ v_1&v_2&v_2&v_3\\ v_3&v_3&v_3&v_1\\ v_2&v_1&v_1&v_3\\ v_3&v_2&v_2&v_2\\ v_1&v_1&v_3&v_1\\ v_3&v_3&v_1&v_3\\ v_2&v_1&v_2&v_2\\ v_1&v_2&v_3&v_3\\ v_3&v_3&v_2&v_1\\ v_2&v_2&v_1&v_2 \end{bmatrix} \begin{bmatrix} v_1&v_1&v_2&v_3\\ v_3&v_2&v_3&v_1\\ v_2&v_3&v_2&v_2\\ v_1&v_2&v_1&v_3\\ v_3&v_1&v_3&v_2\\ v_1&v_3&v_2&v_1\\ v_2&v_2&v_3&v_3\\ v_3&v_1&v_1&v_1\\ v_1&v_3&v_3&v_2\\ v_2&v_2&v_2&v_1\\ v_3&v_1&v_3&v_3\\ v_2&v_3&v_1&v_2\\ v_1&v_2&v_3&v_1\\ v_2&v_1&v_2&v_3\\ v_3&v_3&v_1&v_1\\ v_1&v_2&v_2&v_2 \end{bmatrix} \begin{bmatrix} v_2&v_1&v_3&v_1\\ v_3&v_3&v_2&v_3\\ v_1&v_2&v_1&v_1\\ v_2&v_3&v_3&v_2\\ v_3&v_1&v_2&v_1\\ v_1&v_3&v_1&v_3\\ v_3&v_2&v_3&v_2\\ v_2&v_3&v_2&v_1\\ v_1&v_1&v_1&v_2\\ v_3&v_3&v_3&v_3\\ v_2&v_2&v_1&v_1\\ v_1&v_3&v_2&v_2\\ v_3&v_1&v_1&v_3\\ v_2&v_2&v_3&v_2\\ v_1&v_1&v_2&v_1\\ v_3&v_3&v_1&v_2 \end{bmatrix} \begin{bmatrix} v_2&v_2&v_2&v_3\\ v_1&v_3&v_3&v_1\\ v_2&v_1&v_1&v_2\\ v_3&v_2&v_2&v_1\\ v_2&v_3&v_3&v_3\\ v_1&v_2&v_1&v_2\\ v_3&v_1&v_2&v_3\\ v_2&v_3&v_1&v_1\\ v_1&v_1&v_3&v_2\\ v_3&v_2&v_1&v_3\\ v_2&v_1&v_2&v_1\\ v_3&v_3&v_3&v_2\\ v_1&v_1&v_1&v_3\\ v_2&v_2&v_3&v_1\\ v_3&v_1&v_2&v_2\\ v_2&v_3&v_1&v_3 \end{bmatrix}$$ \section{Future Work} This paper has constructed a framework that can be used to generate orderings for Hamming graphs in such a way that they must satisfy the radio condition. However, there has yet to be found any reasonable method within this framework of weak generators that ensures avoiding repetition i.e. an ordering generated in this way will only satisfy the conditions for a (consecutive) radio labeling up to labeling vertices multiple times. 
\section{Future Work} This paper has constructed a framework that can be used to generate orderings for Hamming graphs in such a way that they must satisfy the radio condition. However, no reasonable method has yet been found within this framework of weak generators that ensures repetition is avoided; that is, an ordering generated in this way satisfies the conditions for a (consecutive) radio labeling only up to possibly labeling vertices multiple times. If a clever constraint on $O'$ were found that ruled out repetition, then this would provide us with a method for generating radio labelings relatively quickly by simply searching for $O'$ satisfying the constraints of Theorem \ref{LambdasGeneral} together with that constraint. Additionally, if there were a construction for a potential radio labeling of a Hamming graph, one need only prove that this construction satisfies Theorem \ref{LambdasGeneral} and does not create repetition. \begin{question} Is there an efficient process for generating $O'$ without creating repetition? \end{question} This paper has also taken the first step toward filling the gap in Figure \ref{Fig:gap}, but it is still unknown if there is a tighter upper bound than the one presented as Corollary \ref{upperbound}. \begin{question} Is Corollary \ref{upperbound} a tight upper bound? \end{question} \begin{question} Is Theorem \ref{upperboundgeneral} a tight upper bound? \end{question} \begin{question} In light of both Corollary \ref{BoundaryForced} and Corollary \ref{Lambdas}, as well as the process illustrated for labeling $K_3^4$, is there a method for generating valid $O'$ for $K_n^t$ where $t=\frac{n(n^2-1)}{6}$? \end{question} \begin{question} Are there choices of $\Delta_n^i$ that yield $\Lambda_s^i$ with any significant or useful algebraic structure? \end{question} \begin{question} Is there a method to algebraically study consecutive radio labelings of Hamming graphs? \end{question} \bibliographystyle{amsplain}
\section{Introduction} By a \textit{semigroup}, we mean a non-empty set $S$ with an associative binary operation, $(x,y)\mapsto xy$, from $S\times S\to S$. An element $x$ of a semigroup $S$ is said to be \textit{regular} if there exists an element $x'$ in $S$ with $xx'x=x$. If $x'$ satisfies the equation $x'xx'=x'$ also, then $x'$ is called a \textit{generalized inverse} of $x$. It is not difficult to show that every regular element has a generalized inverse, for if $x'$ satisfies the equation of regularity, then $x''=x'xx'$ satisfies both equations for generalized inverses. A semigroup in which all the elements are regular is called a regular semigroup. A ring $R$ is said to be a \emph{regular ring} if its multiplicative semigroup is regular. Any regular semigroup has a rich supply of \textit{idempotents}, that is, elements $e$ for which $e^2=e$. In \cite{kss}, the set of idempotents of a regular semigroup is given an abstract characterization as a partial algebra with two quasi-orders, which is termed a \textit{regular biordered set}. We give here a few essential notions of biordered sets. Details can be found in \cite{kss}. Let $E$ be a non-empty set in which a partial binary operation is defined. (This means the product $ef$ is defined only for certain pairs $e, f$ of elements of $E$.) We define two relations $\omr$ and $\oml$ on $E$ by \begin{equation*} \oml=\{(e,f)\in E\times E\colon ef=e\} \quad\text{and}\quad \omr=\{(e,f)\in E\times E\colon fe=e\} \end{equation*} One of the axioms of a biordered set is that these relations are \textit{quasi-orders}, that is, they are reflexive and transitive. Note that this means the relation $\om$ defined by \begin{equation*} \om=\oml\medcap\omr \end{equation*} is a partial order on $E$. For $e$ in $E$, we define \begin{equation*} \oml(e)=\{f\in E\colon f\rel\oml e\} \quad\text{and}\quad \omr(e)=\{f\in E\colon f\rel\omr e\} \end{equation*} And for $e$, $f$ in $E$, we define the $\mathsf{M}$-set \begin{equation*} \mset ef=\oml(e)\medcap\omr(f) \end{equation*} Also, for $e$, $f$ in $E$, we define the \textit{sandwich set} of $e$ and $f$ by \begin{equation*} \swset ef= \{h\in\mset ef\colon g\preceq h\;\;\text{for all}\;\;g\in\mset ef\} \end{equation*} where $\preceq$ is the quasi-order defined by \begin{equation*} g\preceq h\iff eg\rel\omr eh, gf\rel\oml hf \end{equation*} The regularity condition on a biordered set is that \begin{equation*} \swset ef\ne\emptyset \;\;\text{for all $e$ and $f$ in $E$} \end{equation*} We first see how certain properties of the idempotents in a regular ring can be formulated in biorder terms and then later show that these properties actually characterize the biordered set of a ring of matrices. \section{Idempotents in a regular ring} Let $R$ be a ring with unity and let $E$ be the set of idempotents of $R$. It is easily seen that if $e$ is an idempotent in $R$, then $1-e$ is also an idempotent in $R$. Thus if we denote $1-e$ by $\annid e$, then we have a map $e\mapsto\annid e$ with $\dannid e=e$. We first prove some elementary properties of this map in terms of the biorder relations of $E$. \begin{prop}\label{annid} Let $E$ be the set of idempotents of a ring with unity and for each $e$ in $E$, let $\annid e=1-e$.
Then for $e$, $f$ in $E$, we have the following: \begin{mathenum} \item $f\rel\oml e$ if and only if $\annid e\rel\omr\annid f$ \item $f\rel\oml\annid e$ if and only if $fe=0$ \end{mathenum} \end{prop} \begin{proof} If $f\rel\oml e$, then by definition of $\oml$, we have $fe=f$, so that \begin{equation*} \annid f\annid e=(1-f)(1-e)=1-e-f+fe=1-e=\annid e \end{equation*} and so $\annid e\rel\omr\annid f$. Conversely, if $\annid e\rel\omr\annid f$, then $\annid f\annid e=\annid e$, so that \begin{equation*} fe=(1-\annid f)(1-\annid e) =1-\annid e-\annid f+\annid f\annid e =1-\annid f=f \end{equation*} and so $f\rel\oml e$. This proves (i). To prove (ii), first let $f\rel\oml\annid e$. Then $f\annid e=f$, so that \begin{equation*} fe=f(1-\annid e)=f-f\annid e=0 \end{equation*} Conversely, if $fe=0$, then \begin{equation*} f\annid e=f(1-e)=f-fe=f \end{equation*} so that $f\rel\oml\annid e$ \end{proof} The condition $fe=0$ can also be formulated in biorder terms in any regular semigroup with a zero element, using the idea of the $\mathsf{M}$-set defined earlier. \begin{lem}\label{prod0} Let $S$ be a regular semigroup with zero and let $e$ and $f$ be idempotents in $S$. Then $ef=0$ if and only if\/ $\mset ef=\{0\}$. \end{lem} \begin{proof} First suppose that $ef=0$ and let $g\in\mset ef$. Then by definition, $g$ is an idempotent in $S$ with $ge=g=fg$ so that \begin{equation*} g=g^2=(ge)(fg)=g(ef)g=0 \end{equation*} since $ef=0$. Conversely, suppose $\mset ef=\{0\}$. Since $S$ is regular, the element $ef$ in $S$ has a generalized inverse $x$ in $S$. Let $g=fxe$. Then \begin{equation*} g^2=f(xefx)e=fxe=g \end{equation*} so that $g$ is an idempotent. Also, $ge=g=fg$ so that $g\in\mset ef$. Hence $g=0$ and so \begin{equation*} ef=(ef)x(ef)=e(fxe)f=egf=0 \end{equation*} This completes the proof. \end{proof} Using this, the second part of Proposition~\ref{annid} can be reformulated as follows: \begin{cor} For idempotents $e$, $f$ in a regular ring with unity, $f\rel\oml\annid e$ if and only if $\mset fe=\{0\}$ \end{cor} We next note that if $e$ and $f$ are idempotents in a ring with $ef=fe=0$, then $e+f$ is also an idempotent. This property can be characterized in biorder terms. We first note that the conditions $ef=fe=0$ are equivalent to the conditions $e\rel\oml\annid f$ and $f\rel\oml\annid e$, by Proposition~\ref{annid}(ii) and the relation $e\rel\oml\annid f$ is equivalent to $f\rel\omr\annid e$, by Proposition~\ref{annid}(i). It follows that $ef=fe=0$ iff $f\rel\om\annid e$. Also, if this condition holds, then the idempotent $e+f$ can be characterized in terms of sandwich sets. For this, we make use of the fact that if $E$ is the biordered set of idempotents of a regular semigroup $S$, then \begin{equation*} \swset ef=\{h\in E\colon fhe=h\;\;\text{and}\;\;ehf=ef\;\;\text{in}\;\;S\} \end{equation*} (cf.\ Theorem 1.1 of \cite{kss}). \begin{prop} Let $E$ be the biordered set of idempotents of a regular ring with unity and for each $e$ in $E$, let $\annid e=1-e$. For $e$, $f$ in $E$, if $f\rel\om\annid e$, then there is a unique idempotent in $E$ which belongs to both the sandwich sets $\swset{\annid e}{\annid f}$ and $\swset{\annid f}{\annid e}$ \end{prop} \begin{proof} Let $f\rel\om\annid e$. Then as noted above, we have $ef=fe=0$. Hence \begin{equation*} (e+f)^2=e+f+ef+fe=e+f \end{equation*} so that $e+f$ is in $E$. Let $h=1-(e+f)$ so that $h$ is also in $E$.
Now \begin{equation*} \annid e\annid f=(1-e)(1-f)=1-e-f=h \end{equation*} since $ef=0$ and similarly \begin{equation*} \annid f\annid e=h \end{equation*} since $fe=0$. Hence \begin{equation*} \annid fh\annid e=\annid f(\annid f\annid e)\annid e =\annid f\annid e=h \end{equation*} since $\annid e$ and $\annid f$ are idempotents. Again, \begin{equation*} \annid eh\annid f=\annid e(\annid e\annid f)\annid f=\annid e\annid f \end{equation*} Since $E$ is the biordered set of the idempotents of the multiplicative semigroup of $R$, which is regular, it follows from the comments preceding the result that $h$ is in $\swset{\annid e}{\annid f}$. Similar computations show that $h$ is also in $\swset{\annid f}{\annid e}$. To prove uniqueness, let $g$ be an element in $E$ belonging to both these sandwich sets. Then \begin{equation*} \annid eg\annid f=\annid e\annid f \end{equation*} since $g$ is in $\swset{\annid e}{\annid f}$ and \begin{equation*} \annid eg\annid f=g \end{equation*} since $g$ is in $\swset{\annid f}{\annid e}$. Hence $g=\annid e\annid f=h$. \end{proof} Another property of the biordered set of a regular ring is linked to the ideal theory of regular rings. It is well known that in a regular ring, every principal left or right ideal is generated by an idempotent, and that the set of principal left ideals and the set of principal right ideals of a regular ring with unity form dually isomorphic complemented modular lattices (see \cite{vn}, \cite{bj}). We next show that under certain conditions discussed above, a biordered set can be realized as the biordered set of a regular semigroup whose principal left ideals and principal right ideals form dually isomorphic complemented modular lattices. \section{Strongly regular Baer semigroups} We start by observing that in any semigroup $S$ with zero, we can define the \textit{left annihilator} of an element $s$ by \begin{equation*} \lann s=\{x\in S\colon xs=0\} \end{equation*} and the \textit{right annihilator} of $s$ by \begin{equation*} \rann s=\{x\in S\colon sx=0\} \end{equation*} The semigroup $S$ is said to be a \textit{strongly regular Baer semigroup} if the set of left annihilators of elements of $S$ is equal to the set of principal left ideals of $S$ and the set of right annihilators is equal to the set of principal right ideals of $S$. It can be shown that the multiplicative semigroup of a regular ring with unity is a strongly regular Baer semigroup (\cite{bj}). Also, a strongly regular Baer semigroup is a regular semigroup, in the sense defined earlier. We now show that if a biordered set satisfies some of the properties discussed in the previous section, then it can be realized as the biordered set of a strongly regular Baer semigroup. \begin{thm}\label{srbsg} Let $E$ be a regular biordered set with the following properties.
\renewcommand{\theenumi}{{\normalfont(\textsf{E\arabic{enumi}})}} \renewcommand{\labelenumi}{\theenumi} \renewcommand{\theenumii}{{\normalfont(\roman{enumii})}} \renewcommand{\labelenumii}{\theenumii} \begin{enumerate} \item There exists an element $0$ in $E$ such that $0\rel\om e$ for each $e$ in $E$\label{e1} \item There exists a map $e\mapsto\annid e$ satisfying the following conditions:\label{e2} \begin{enumerate} \item $\dannid e=e$ for each $e$ in $E$\label{e21} \item $f\rel\oml e$ if and only if $\annid e\rel\omr\annid f$ for $e$, $f$ in $E$\label{e22} \item $f\rel\oml\annid e$ if and only if $\mset fe=\{0\}$ for $e$, $f$ in $E$\label{e23} \end{enumerate} \end{enumerate} Then there exists a strongly regular Baer semigroup $S$ such that $E$ is the biordered set of idempotents of $S$. \end{thm} To prove this result, we make use of a couple of lemmas. First we show that in any biordered set satisfying the above conditions, the duals of these conditions also hold. \begin{lem}\label{dual} Let $E$ be a biordered set satisfying \ref{e1} and \ref{e2}. Then $E$ satisfies the following conditions also: \begin{mathenum} \item there exists $1$ in $E$ such that $e\rel\om1$ for each $e$ in $E$. \item $f\rel\omr e$ if and only if $\annid e\rel\oml\annid f$, for $e$, $f$ in $E$ \item $f\rel\omr\annid e$ if and only if $\mset ef=\{0\}$, for $e$, $f$ in $E$ \end{mathenum} \end{lem} \begin{proof} We first prove (ii). Let $e$ and $f$ be elements of $E$ with $f\rel\omr e$. By \ref{e21}, we have $e=\dannid e$ and $f=\dannid f$, so that we have $\dannid f\rel\omr\dannid e$. By \ref{e22}, this gives $\annid e\rel\oml\annid f$. On the other hand if we have $\annid e\rel\oml\annid f$, then from \ref{e22} we get $\dannid f\rel\omr\dannid e$ which gives $f\rel\omr e$, by \ref{e21}. Now to prove (i), let $1=\annid 0$. Then for each $e$ in $E$, since $0\rel\oml\annid e$ by \ref{e1}, we have $\dannid e\rel\omr\annid 0=1$, by \ref{e22}, so that $e\rel\omr 1$, using \ref{e21}. Again, since $0\rel\omr\annid e$ by \ref{e1}, we have $\dannid e\rel\oml\annid0$, by what we have proved above and so $e\rel\oml1$. Thus $e\rel\om1$. To prove (iii), first let $e$ and $f$ be elements of $E$ with $f\rel\omr\annid e$. Then by what we have proved above, we get $\dannid e\rel\oml\annid f$ and hence $e\rel\oml\annid f$, using \ref{e21}. By \ref{e23}, this gives $\mset ef=\{0\}$. Conversely suppose $e$ and $f$ are elements of $E$ with $\mset ef=\{0\}$. Then from \ref{e23}, we get $e\rel\oml\annid f$ and hence $\dannid f\rel\omr\annid e$, from \ref{e22}; that is, $f\rel\omr\annid e$, using \ref{e21}. \end{proof} Now if $E$ is a regular biordered set, then there exists a regular semigroup $S$ with $E$ as its set of idempotents and which is idempotent generated, in the sense that every element of $S$ is a product of elements of $E$ (see Section 6 of \cite{kss}). It is easy to see that if $E$ is a regular biordered set satisfying (E1) and (E2), then 0 is the zero and 1 is the identity of every idempotent generated regular semigroup with $E$ as its set of idempotents. We next show that for each $e$ in $E$, the generator of the annihilators of $e$ in such a semigroup is $\annid e$. \begin{lem}\label{ann} Let $E$ be a regular biordered set satisfying \ref{e1} and \ref{e2} and $S$ be a regular idempotent generated semigroup with $E$ as its biordered set of idempotents. Then for each $e$ in $E$, we have $\lann e=S\annid e$ and $\rann e=\annid eS$. 
\end{lem} \begin{proof} First note that since $S$ is idempotent generated, the element $0$ of $E$ is the zero of $S$. Let $e$ be an element of $E$ and let $x$ be an element of $S$ which belongs to $\lann e$ so that $xe=0$. Since $S$ is regular, there exists $x'$ in $S$ with $xx'x=x$. Let $f=x'x$ so that $f$ is an element of $E$ with \begin{equation*} xf=xx'x=x \end{equation*} Now \begin{equation*} fe=(x'x)e=x'(xe)=0 \end{equation*} so that $\mset fe=\{0\}$, by Lemma~\ref{prod0}. Hence $f\rel\oml\annid e$, by \ref{e23}. So, $f\annid e=f$, by definition of $\oml$. This gives \begin{equation*} x\annid e=(xf)\annid e=x(f\annid e)=xf=x \end{equation*} Thus $x=x\annid e\in S\annid e$. It follows that $\lann e\subseteq S\annid e$. To prove the reverse inclusion, let $x\in S\annid e$ so that $x\annid e=x$. Let $f$ be defined as before. Then \begin{equation*} f\annid e=(x'x)\annid e=x'(x\annid e)=x'x=f \end{equation*} so that $f\rel\oml\annid e$, by definition and so $\mset fe=\{0\}$, by \ref{e23}. Hence $fe=0$, by Lemma~\ref{prod0}, so that \begin{equation*} xe=(xf)e=x(fe)=0 \end{equation*} Thus $x\in\lann e$ and it follows that $S\annid e\subseteq\lann e$. So $S\annid e=\lann e$. A dual argument using Lemma~\ref{dual} proves the result for right annihilators. \end{proof} Now we can prove our theorem. {\renewcommand{\proofname}{\textsc{Proof of the Theorem}} \begin{proof} Since $E$ is a regular biordered set, there exists a regular idempotent generated semigroup $S$ with $E$ as the biordered set of idempotents, as noted earlier. We will show that $S$ is a strongly regular Baer semigroup. To show that the left annihilator of each element is a principal left ideal in $S$, let $x$ be an element of $S$ and consider $\lann x$. Since $S$ is regular, there exists $x'$ in $S$ with $xx'x=x$. Let $e=xx'$ so that $e$ is an element of $S$ with $ex=x$. We can show that $\lann x=\lann e$. For if $y\in\lann x$, then $yx=0$ so that \begin{equation*} ye=y(xx')=(yx)x'=0 \end{equation*} and so $y\in\lann e$; on the other hand, if $y\in\lann e$, so that $ye=0$, then \begin{equation*} yx=y(ex)=(ye)x=0 \end{equation*} and so $y\in\lann x$. Thus $\lann x=\lann e$ and by Lemma~\ref{ann}, we have $\lann e=S\annid e$. So, $\lann x=S\annid e$. On the other hand, we can show that every principal left ideal in $S$ is the left annihilator of an element in $S$. Let $x$ be an element of $S$ and let $x'$ be an element of $S$ with $xx'x=x$. Then $e=x'x$ is an idempotent with \begin{equation*} Se=Sx'x\subseteq Sx \quad\text{and}\quad Sx=Sxx'x\subseteq Sx'x=Se \end{equation*} so that $Sx=Se$. Now by \ref{e21}, we have $\dannid e=e$, so that $Se=S\dannid e$. Also, by Lemma~\ref{ann}, we have $S\dannid e=\lann{\annid e}$. Thus \begin{equation*} Sx=Se=S\dannid e=\lann{\annid e} \end{equation*} A dual argument proves the corresponding results for principal right ideals and right annihilators. Hence, by definition, $S$ is a strongly regular Baer semigroup. \end{proof}} Now the set of principal left ideals and the set of principal right ideals of a strongly regular Baer semigroup can be shown to be complemented modular lattices which are dually isomorphic (see \cite{bj}). Also, the partially ordered set of principal left ideals and the partially ordered set of principal right ideals of any regular semigroup are isomorphic to the quotients of its biordered set by certain equivalence relations, as indicated below. Let $S$ be a regular semigroup and let $E$ be the biordered set of its idempotents. 
We define the relations $\gle$ and $\gre$ on $E$ by \begin{equation*} \gle=\oml\medcap\,(\oml)^{-1} \quad\text{and}\quad \gre=\omr\medcap\,(\omr)^{-1} \end{equation*} It is easily seen that the relations $\gle$ and $\gre$ are equivalences on $E$ and hence partition $E$. For each $e$ in $E$, we denote the $\gle$-class containing $e$ by $\gle(e)$ and the $\gre$-class containing $e$ by $\gre(e)$. The set of all $\gle$-classes is denoted by $E/\gle$ and the set of all $\gre$-classes by $E/\gre$. Now for $e$ and $f$ in $E$, if $e\rel\oml f$ and $e'\in\gle(e)$ and $f'\in\gle(f)$, then $e'\rel\oml e\rel\oml f\rel\oml f'$, so that $e'\rel\oml f'$, since $\oml$ is transitive. Hence we can unambiguously define a relation $\le$ on $E/\gle$ by \begin{equation*} \gle(e)\le\gle(f)\;\;\text{if and only if}\;\;e\rel\oml f \end{equation*} It is not difficult to see that this relation is a partial order on $E/\gle$. Also, it can be easily seen that for $e$ and $f$ in $E$, we have $e\rel\oml f$ if and only if $Se\subseteq Sf$ and so $\gle(e)\le\gle(f)$ if and only if $Se\subseteq Sf$. Thus the partially ordered set $E/\gle$ is isomorphic with the partially ordered set of principal left ideals of $S$. Similarly, we can define a partial order $\le$ on the set $E/\gre$ of $\gre$-classes in $E$ by \begin{equation*} \gre(e)\le\gre(f)\;\;\text{if and only if}\;\;e\rel\omr f \end{equation*} and this partially ordered set is isomorphic with the partially ordered set of principal right ideals of $S$. Thus if $E$ is the biordered set of idempotents of a strongly regular Baer semigroup, then the quotients $E/\gle$ and $E/\gre$ are complemented modular lattices and they are dually isomorphic. In the following, we denote these lattices by $\llat E$ and $\rlat E$ respectively. So, from Theorem~\ref{srbsg}, we get the following \begin{cor}\label{llatcm} Let $E$ be a regular biordered set satisfying \ref{e1} and \ref{e2} of Theorem~\ref{srbsg}. Then $\llat E$ and $\rlat E$ are dually isomorphic complemented modular lattices.\qed \end{cor} We also note the following result on complements in $E/\gle$. \begin{cor}\label{llatcompl} Let $E$ be a regular biordered set satisfying \ref{e1} and \ref{e2} of Theorem~\ref{srbsg}. Then for each $e$ in $E$, the sets $\gle(e)$ and $\gle(\annid e)$ are complements of each other in the lattice $\llat E$. \end{cor} \begin{proof} By Theorem~\ref{srbsg}, there exists a strongly regular Baer semigroup $S$ with $E$ as its biordered set of idempotents. Let $e$ be an element of $E$. We will show that $Se$ and $S\annid e$ are complements of each other in the lattice of principal left ideals of $S$. Let $x\in Se\medcap S\annid e$. Then $xe=x$, since $x\in Se$. Also, $x\in S\annid e$ and by Lemma~\ref{ann}, we have $S\annid e=\lann e$, so that $xe=0$. Thus $x=xe=0$ and it follows that $Se\medcap S\annid e=\{0\}$. To show that $Se\medvee S\annid e=S$, suppose that $Se\medvee S\annid e=Sf$, so that $Se\subseteq Sf$ and $S\annid e\subseteq Sf$, which means $e\rel\oml f$ and $\annid e\rel\oml f$. Since $e\rel\oml f$, we have $\annid f\rel\omr\annid e$, so that $\annid e\annid f=\annid f$ and since $\annid e\rel\oml f$ we have $\annid f\rel\omr \dannid e=e$, so that $e\annid f=\annid f$. Hence \begin{equation*} \annid f=\annid e\annid f=\annid e(e\annid f) =(\annid ee)\annid f=0 \end{equation*} and so $f=\dannid f=\annid 0=1$. Since $1$ is the identity of $S$, we have $S1=S$. Thus $Se\medvee S\annid e=Sf=S1=S$.
\end{proof} Since the multiplicative semigroup of a regular ring with unity is a strongly regular Baer semigroup, these results hold in particular for biordered sets of regular rings. Now in \cite{vn}, it is shown that if $L$ is a complemented modular lattice satisfying certain conditions, then it can be realized as the lattice of principal left ideals of a matrix ring over a regular ring. To translate these conditions into biorder terms, we take a look at the idempotents in such a ring. \section{Idempotents in matrix rings} In \cite{vn}, it is shown that if $R$ is a regular ring, then for every natural number $n$, the ring $R_n$ of $n\times n$ matrices over $R$ is also a regular ring. In this section, we look at some peculiarities of the biordered set of $R_n$. We first note that this ring contains a special class of idempotents. For each $i=1,2,\dotsc,n$ we define $\matre i$ to be the $n\times n$ matrix with a single 1 at the $i^\text{th}$ row and $i^\text{th}$ column and 0's elsewhere. It easily follows from the usual rules of matrix multiplication that $\matre i$ is an idempotent for each $i$. Also, $\matre i\matre j=\matr0$ for $i\ne j$ and $\matre1+\matre2+\dotsb+\matre n=\matr I$, where $\matr0$ is the $n\times n$ zero matrix and $\matr I$ is the $n\times n$ identity matrix. We now see how the condition on the sum of these idempotents can be translated into biorder terms. \begin{prop} Let $e_1, e_2,\dotsc,e_n$ be idempotents in a regular ring $R$ with unity such that $e_ie_j=0$ for $i\ne j$. Then the following are equivalent \begin{mathenum} \item $e_1+e_2+\dotsb+e_n=1$ \item If $e$ is an idempotent in $R$ such that $e_i\rel\om e$ for each $i=1,2,\dotsc,n$, then $e=1$ \end{mathenum} \end{prop} \begin{proof} First suppose that (i) holds and suppose that $e$ is an idempotent in $R$ with $e_i\rel\om e$ for $i=1,2,\dotsc,n$. Then $ee_i=e_i$ for each $i=1,2,\dotsc,n$, so that \begin{equation*} e=e1=e(e_1+e_2+\dotsb+e_n)=e_1+e_2+\dotsb+e_n=1 \end{equation*} which gives (ii). Conversely, suppose that (ii) holds and let $e=e_1+e_2+\dotsb+e_n$. Then \begin{equation*} e^2=(e_1+e_2+\dotsb+e_n)(e_1+e_2+\dotsb+e_n)=e_1+e_2+\dotsb+e_n=e \end{equation*} since each $e_i$ is an idempotent and $e_ie_j=0$ for $i\ne j$. Thus $e$ is an idempotent. Moreover for each $e_i$ \begin{equation*} e_ie=e_i(e_1+e_2+\dotsb+e_n)=e_i \end{equation*} and similarly, $ee_i=e_i$. Thus $e_i\rel\om e$ for each $i$ and so $e=1$, by (ii). \end{proof} This discussion, together with Lemma~\ref{prod0}, gives the following \begin{prop} Let $R$ be a regular ring with unity and let $R_n$ be the ring of $n\times n$ matrices over $R$. Then there exist idempotents $\matre1, \matre2,\dotsc,\matre n$ in $R_n$ such that \begin{mathenum} \item $\mset{\matre i}{\matre j}=\{\matr 0\}$, for $i\ne j$ \item if $\matre{}$ is an idempotent in $R_n$ such that $\matre i\rel\om\matre{}$ for each $i=1,2,\dotsc,n$, then $\matre{}=\matr I$ \qed \end{mathenum} \end{prop}
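For instance, for $n=2$ these idempotents are \begin{equation*} \matre1=\begin{pmatrix}1&0\\0&0\end{pmatrix} \quad\text{and}\quad \matre2=\begin{pmatrix}0&0\\0&1\end{pmatrix}, \end{equation*} and direct computation gives $\matre1\matre2=\matre2\matre1=\matr0$ and $\matre1+\matre2=\matr I$, so that $\mset{\matre1}{\matre2}=\{\matr0\}$ by Lemma~\ref{prod0}. Note also that in this case $\annid{\matre1}=\matr I-\matre1=\matre2$, so the map $e\mapsto\annid e$ interchanges these two idempotents.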
Another property of these idempotents is that any pair of them generate principal left ideals which have a common complement (see the proof of Theorem 3.3, Part II, \cite{vn}). To describe this property in biorder terms, we introduce some terminology from \cite{kss} in a slightly modified form. Let $e$ and $f$ be idempotents in a biordered set $E$. As in \cite{kss}, by an $E$-sequence from $e$ to $f$, we mean a finite sequence $e_0=e,e_1,e_2,\dotsc,e_{n-1},e_n=f$ of elements of $E$ such that $e_{i-1}(\gle\medcup\gre)e_i$ for $i=1,2,\dotsc,n$ and in this case, $n$ is called the length of the $E$-sequence. If there exists an $E$-sequence from $e$ to $f$, we define $d(e,f)$ to be the length of the shortest $E$-sequence from $e$ to $f$; also we define $d(e,e)=1$. If there is no $E$-sequence from $e$ to $f$, we define $d(e,f)=0$. For our purposes, we will have to distinguish between $E$-sequences starting with $\gle$ and those starting with $\gre$. For idempotents $e$ and $f$, we define $d_l(e,f)$ to be the length of the shortest $E$-sequence from $e$ to $f$ that starts with the $\gle$ relation and $d_r(e,f)$ to be the length of the shortest $E$-sequence from $e$ to $f$ that starts with the $\gre$ relation. The condition for two principal left ideals of a ring to have a common complement can be described in terms of the $d_l$ function as follows. Following \cite{vn}, two elements of a lattice which have a common complement are said to be \emph{in perspective}. \begin{prop}\label{idpersp} Let $E$ be the set of idempotents of a regular ring $R$ and $e$ and $f$ be elements of $E$. Then $\gle(e)$ and $\gle(f)$ are in perspective in $\llat E$ if and only if $1\le d_l(e,f)\le3$. \end{prop} \begin{proof} First suppose that $\gle(e)$ and $\gle(f)$ are in perspective and let $\gle(g)$ be a common complement of $\gle(e)$ and $\gle(f)$ in $\llat E$. Since $\gle(e)$ and $\gle(g)$ are complements of each other in $\llat E$, there exists $h$ in $E$ with $Rh=Re$ and $R(1-h)=Rg$ (see \cite{vn}, Part II, Theorem~2.1) so that \begin{equation*} \gle(h)=\gle(e) \quad\text{and}\quad \gle(1-h)=\gle(g) \end{equation*} Again, since $\gle(f)$ and $\gle(g)$ are complements of each other, there exists $k$ in $E$ with \begin{equation*} \gle(k)=\gle(f) \quad\text{and}\quad \gle(1-k)=\gle(g) \end{equation*} Now since $\gle(e)=\gle(h)$, we have $e\rel\gle h$ and since $\gle(k)=\gle(f)$, we have $k\rel\gle f$. Also, we have $\gle(1-h)=\gle(g)=\gle(1-k)$ so that $1-h\rel\gle 1-k$ and hence $h\rel\gre k$, by definition of the $\gle$-relation and Proposition~\ref{annid}. It follows from the definition of $d_l$ that $1\le d_l(e,f)\le3$. Conversely, suppose $e$ and $f$ are elements of $E$ with $1\le d_l(e,f)\le 3$. Then there exist $g$ and $h$ in $E$ with $e\rel\gle g\rel\gre h\rel\gle f$ (where some of the elements may be equal). Since $e\rel\gle g$, we have $\gle(e)=\gle(g)$ and so $\gle(1-g)$ is a complement of $\gle(g)=\gle(e)$. Also, from $g\rel\gre h$, we have $1-g\rel\gle 1-h$ so that $\gle(1-g)=\gle(1-h)$ and so $\gle(1-g)$ is a complement of $\gle(h)$. Moreover, from $h\rel\gle f$, we have $\gle(h)=\gle(f)$. Hence $\gle(1-g)$ is a complement of $\gle(h)=\gle(f)$. Thus $\gle(1-g)$ is a complement of both $\gle(e)$ and $\gle(f)$. \end{proof} We next show that any regular biordered set satisfying some of the conditions discussed so far can be realized as the set of idempotents of a ring of matrices over a regular ring. \section{Biordered sets of matrix rings} In this section, we prove our main result: \begin{thm}\label{mr} Let $E$ be a regular biordered set satisfying the following properties.
\renewcommand{\theenumi}{{\normalfont(\textsf{E\arabic{enumi}})}} \renewcommand{\labelenumi}{\theenumi} \renewcommand{\theenumii}{{\normalfont(\roman{enumii})}} \renewcommand{\labelenumii}{\theenumii} \begin{enumerate} \item There exists an element $0$ in $E$ such that $0\rel\om e$ for each $e$ in $E$ \item There exists a map $e\mapsto\annid e$ satisfying the following conditions: \begin{enumerate} \item $\dannid e=e$ for each $e$ in $E$ \item $f\rel\oml e$ if and only if $\annid e\rel\omr\annid f$ for $e$, $f$ in $E$ \item $f\rel\oml\annid e$ if and only if $\mset fe=\{0\}$ for $e$, $f$ in $E$ \end{enumerate} \item If $f\rel\om\annid e$, then $\swset{\annid e}{\annid f}\medcap\swset{\annid f}{\annid e}\ne\emptyset$ \label{e3} \item There exist idempotents $e_1,e_2,\dotsc,e_n$ in $E$, where $n\ge4$, satisfying the following conditions:\label{e4} \begin{enumerate} \item $\mset{e_i}{e_j}=\{0\}$ for $i\ne j$\label{e41} \item if $e$ is in $E$ with $e_i\rel\om e$ for $i=1,2,\dotsc,n$, then $e=1$\label{e42} \item $d_l(e_i,e_j)=3$, for $i\ne j$\label{e43} \end{enumerate} \end{enumerate} Then there exists a regular ring $R$ with the biordered set of idempotents of the ring $R_n$ of $n\times n$ matrices over $R$ isomorphic with $E$. \end{thm} To prove this result, we first look at some consequences of these conditions. We start by noting that in the case of a biordered set satisfying \ref{e1}, \ref{e2} and \ref{e3}, there is exactly one element in $\swset{\annid e}{\annid f}\medcap\swset{\annid f}{\annid e}$. \begin{prop}\label{oplus} Let $E$ be a regular biordered set satisfying \ref{e1}, \ref{e2} and \ref{e3}. If $f\rel\om \annid e$ then there is a unique element in $E$ belonging to $\swset{\annid e}{\annid f}\medcap\swset{\annid f}{\annid e}$. \end{prop} \begin{proof} By Theorem~\ref{srbsg}, there exists a regular semigroup $S$ with its biordered set of idempotents equal to $E$. Since $S$ is regular, we have \begin{equation*} \swset ef=\{h\in E\colon ehf=ef\;\;\text{and}\;\;fhe=h\} \end{equation*} (see \cite{kss}, Theorem 1.1). Let $h\in\swset{\annid e}{\annid f}\medcap\swset{\annid f}{\annid e}$. Then $h\in \swset{\annid e}{\annid f}$, so that \begin{equation*} \annid e h\annid f=\annid e\annid f \end{equation*} Also, $h\in\swset{\annid f}{\annid e}$, so that \begin{equation*} \annid e h=h\annid f=h \end{equation*} Hence \begin{equation*} h=h^2=(\annid eh)(h\annid f)=\annid eh\annid f=\annid e\annid f \end{equation*} Thus the only element in $\swset{\annid e}{\annid f}\medcap\swset{\annid f}{\annid e}$ is $\annid e\annid f$. \end{proof} In the following, for $e$ and $f$ in a biordered set satisfying \ref{e1}, \ref{e2} and \ref{e3}, if $h$ is the unique element of $\swset{\annid e}{\annid f}\medcap\swset{\annid f}{\annid e}$, then we denote $\annid h$ by $e\oplus f$. The next result gives an alternate characterization of $e\oplus f$. \begin{prop}\label{sumidalt} Let $e$ and $f$ be elements of a regular biordered set satisfying \ref{e1}, \ref{e2} and \ref{e3} with $f\rel\om\annid e$ and let $h=e\oplus f$. Then $h$ satisfies the following conditions. \begin{mathenum} \item $e\rel\om h$ and $f\rel\om h$ \item if $g$ is in $E$ with $e\rel\oml g$ and $f\rel\oml g$, then $h\rel\oml g$ \item if $g$ is in $E$ with $e\rel\omr g$ and $f\rel\omr g$, then $h\rel\omr g$ \end{mathenum} Moreover, these properties characterize $h$. \end{prop} \begin{proof} To prove (i), note that $\annid{h}\in \swset{\annid{e}}{\annid{f}}\medcap\swset{\annid{f}}{\annid{e}}$, by definition.
Since $\annid h\in\swset{\annid e}{\annid f}$, we have $\annid h\rel\oml\annid e$ and $\annid h\rel\omr\annid f$, so that $e\rel\omr h$ and $f\rel\oml h$. Similarly, since $\annid h\in\swset{\annid f}{\annid e}$, we have $e\rel\oml h$ and $f\rel\omr h$. Thus $e\rel\om h$ and $f\rel\om h$. To prove (ii), let $g\in E$ with $e\rel\oml g$ and $f\rel\oml g$. Then $\annid g\rel\omr \annid e$ and $\annid g\rel\omr\annid f$. Hence \begin{equation*} \annid e\annid g=\annid g \quad\text{and}\quad \annid f\annid g=\annid g \end{equation*} Let $S$ be a regular semigroup with its biordered set of idempotents equal to $E$. Then as seen in the previous result, we have $\annid h=\annid e\annid f$. Hence \begin{equation*} \annid h\annid g=(\annid e\annid f)\annid g =\annid e(\annid f\annid g) =\annid e\annid g =\annid g \end{equation*} so that $\annid g\rel\omr\annid h$ and so $h\rel\oml g$. This proves (ii). A dual argument establishes (iii). To prove uniqueness of $h$, suppose $h$ and $h'$ are elements of $E$ satisfying these conditions. Then $e\rel\oml h'$ and $f\rel\oml h'$, so that $h\rel\oml h'$. Similarly $h\rel\omr h'$ so that $h\rel\om h'$. Interchanging the roles of $h$ and $h'$, we also have $h'\rel\om h$. Thus $h'=h$. \end{proof} Now let $e$ and $f$ be elements of $E$ with $f\rel\om\annid e$, so that we have $e\oplus f$ in $E$. Let $h=e\oplus f$. Then by the above result, $e\rel\oml h$ and $f\rel\oml h$, so that in the lattice $\llat E =E/\gle$, we have $\gle(e)\le\gle(h)$ and $\gle(f)\le\gle(h)$. Also, if $g\in E$ with $\gle(e)\le\gle(g)$ and $\gle(f)\le\gle(g)$, then $e\rel\oml g$ and $f\rel\oml g$, so that $h\rel\oml g$ and hence $\gle(h)\le\gle(g)$. It follows that $\gle(e)\medvee\gle(f)=\gle(h)$. Also, in this case, $\gle(e)\medcap\gle(f)=\{0\}$. For suppose $g\in\gle(e)\medcap\gle(f)$ so that $ge=g$ and $gf=g$, and so in any regular idempotent generated semigroup $S$ with $E$ as the biordered set of idempotents, \begin{equation*} g=ge=(gf)e=g(fe) \end{equation*} Also since $f\rel\oml\annid e$, we have $\mset{f}{e}=\{0\}$ and so $fe=0$, by Lemma~\ref{prod0}. Hence $g=g(fe)=0$. Thus we have the result below: \begin{prop}\label{sumidlat} Let $E$ be a regular biordered set satisfying \ref{e1}, \ref{e2} and \ref{e3}. Then for $e$ and $f$ in $E$ with $f\rel\om\annid e$ we have $\gle(e)\medvee\gle(f)=\gle(e\oplus f)$ and $\gle(e)\medcap\gle(f)=\{0\}$ in the lattice $\llat E =E/\gle$.\qed \end{prop} The above result can be extended. Let $E$ be as before and let $S$ be an idempotent generated regular semigroup with $E$ as the biordered set of idempotents. Suppose $e_1$, $e_2$, $e_3$ are elements of $E$ with $\mset{e_i}{e_j}=\{0\}$ for $i\ne j$. Since $\mset{e_1}{e_2}=\{0\}$, we have $e_1\rel\oml\annid{e_2}$ and since $\mset{e_2}{e_1}=\{0\}$, we have $e_2\rel\oml\annid{e_1}$, which implies $e_1\rel\omr\annid{e_2}$. Thus $e_1\rel\om\annid{e_2}$ and so we have $f_1=e_1\oplus e_2$ in $E$. In the same fashion, since $\mset{e_1}{e_3}=\{0\}$ and $\mset{e_2}{e_3}=\{0\}$, we have $e_1\rel\oml\annid{e_3}$ and $e_2\rel\oml\annid{e_3}$, so that $f_1=e_1\oplus e_2\rel\oml\annid{e_3}$, by Proposition~\ref{sumidalt}. Dually, we have $f_1\rel\omr\annid{e_3}$. Thus $f_1\rel\om\annid{e_3}$ and so we have $f_1\oplus e_3$ in $E$. As in the proof of Proposition~\ref{sumidalt}, we can show that this element of $E$ is the least upper bound of $e_1$, $e_2$ and $e_3$ with respect to $\oml$ and $\omr$ and so is uniquely determined by these elements.
Hence we can unambiguously write $f_1\oplus e_3$ as $e_1\oplus e_2\oplus e_3$. Also, as in Proposition~\ref{sumidlat}, we have \begin{equation*} \gle(e_1)\medvee\gle(e_2)\medvee\gle(e_3)= \gle(e_1\oplus e_2\oplus e_3) \end{equation*} Again, for distinct $i$, $j$, $k$, we have $e_i\rel\oml\annid{e_k}$ and $e_j\rel\oml\annid{e_k}$, so that $(e_i\oplus e_j)\rel\oml\annid{e_k}$; hence $\mset{e_i\oplus e_j}{e_k}=\{0\}$ and so $(e_i\oplus e_j)e_k=0$ in $S$. Hence \begin{equation*} \left(\gle(e_i)\medvee\gle(e_j)\right)\medcap\gle(e_k) =\gle(e_i\oplus e_j)\medcap\gle(e_k)=\{0\} \end{equation*} By induction, we have the following result. Note that elements $a_1,a_2,\dotsc,a_n$ of a lattice are said to be independent if for each $i=1,2,\dotsc,n$, we have $a_i\medwedge\bigl(\medvee_{\substack{j=1\\j\ne i}}^na_j\bigr)=0$. \begin{prop}\label{llatind} Let $E$ be a regular biordered set satisfying \ref{e1}, \ref{e2} and \ref{e3} and let $e_1,e_2,\dotsc,e_n$ be elements of $E$ with $\mset{e_i}{e_j}=\{0\}$ for $i\ne j$. Then $\gle(e_1),\gle(e_2),\dotsc,\gle(e_n)$ are independent elements in the lattice $\llat E=E/\gle$ with $\gle(e_1)\medvee\gle(e_2)\medvee\dotsb\medvee\gle(e_n)= \gle(e_1\oplus e_2\oplus\dotsb\oplus e_n)$.\qed \end{prop} We can show, as in the proof of Proposition~\ref{idpersp}, that the condition \ref{e43} implies that $\gle(e_i)$ and $\gle(e_j)$ are in perspective. \begin{prop}\label{llatpersp} Let $E$ be a regular biordered set satisfying \ref{e1} and \ref{e2} and let $e$ and $f$ be elements in $E$ with $d_l(e,f)\le3$. Then $\gle(e)$ and $\gle(f)$ are in perspective in the lattice $\llat E=E/\gle$. \end{prop} \begin{proof} Since $d_l(e,f)\le3$, there exist $g$ and $h$ in $E$ with $e\rel\gle g\rel\gre h\rel\gle f$. Let $k=\annid g$. Then $k$ is in $E$ with $\annid k=g$. So, $\gle(k)$ is a complement of $\gle(\annid k)=\gle(g)$ in the lattice $\llat E$, by Corollary~\ref{llatcompl}. Also, since $g\rel\gle e$, we have $\gle(g)=\gle(e)$. Thus $\gle(k)$ is a complement of $\gle(e)$ in $\llat E$. Again, since $g\rel\gre h$, we have $\annid h\rel\gle \annid g=k$ so that $\gle(k)=\gle(\annid h)$ is a complement of $\gle(h)$ in $\llat E$. Also, since $h\rel\gle f$, we have $\gle(h)=\gle(f)$. Thus $\gle(k)$ is a complement of $\gle(f)$ also in $\llat E$. \end{proof} Now suppose $E$ is a regular biordered set satisfying \ref{e1}, \ref{e2}, \ref{e3} and \ref{e4}. Then by Theorem~\ref{srbsg}, there exists a strongly regular Baer semigroup $S$ with its biordered set of idempotents isomorphic with $E$. Also, the lattice of principal left ideals of $S$ is isomorphic with $E/\gle=\llat E$, so that $\llat E$ is a complemented modular lattice, as seen in Corollary~\ref{llatcm}. In \cite{pa}, it is shown how a biordered set $E(L)$ can be constructed from a complemented modular lattice $L$ and it is shown that if $S$ is a strongly regular Baer semigroup with its lattice of principal left ideals isomorphic with $L$, then its biordered set $E(S)$ is isomorphic with $E(L)$ (see Corollary 4 of \cite{pa}). Hence for our biordered set $E$, we have the strongly regular Baer semigroup $S$ with its biordered set isomorphic with $E$ and lattice of principal left ideals isomorphic with $\llat E$, so that by the result cited above, $E(\llat E)$ is isomorphic with $E$. Also, since $E$ satisfies \ref{e41}, the members $\gle(e_1),\gle(e_2),\dotsc,\gle(e_n)$ form an independent set in $\llat E$, by Proposition~\ref{llatind}.
Moreover, by the same result, if $h=e_1\oplus e_2\oplus\dotsb\oplus e_n$, then $\gle(e_1)\medvee\gle(e_2)\medvee\dotsb\medvee\gle(e_n)=\gle(h)$. Now $e_i\rel\om h$ for each $i$, by Proposition~\ref{sumidalt} and so $h=1$, by \ref{e42}. Hence $\gle(e_1)\medvee\gle(e_2)\medvee\dotsb\medvee\gle(e_n)=\gle(1)$. Also, \ref{e43} implies that these members of $\llat E$ are in perspective, by Proposition~\ref{llatpersp}. Thus this set is a \textit{homogeneous basis} of $\llat E$, in the sense of \cite{vn}. Thus $\llat E$ is a complemented modular lattice with a homogeneous basis of rank $n$, and so if $n\ge 4$, then there exists a regular ring $R$ with the lattice of principal left ideals of the matrix ring $R_n$ isomorphic with $\llat E$ (see Theorem 14.1 of \cite{vn}). Again in \cite{pa}, it is shown (see Theorem 5 of \cite{pa}) that if the lattice of principal left ideals of a regular ring is isomorphic with $L$, then the biordered set of the ring is isomorphic with $E(L)$. Hence in our case, the biordered set of idempotents of the regular ring $R_n$ is isomorphic with $E(\llat E)$ which is isomorphic with $E$. This proves our theorem.
\section{\label{sec:intro}Introduction} The use of micromagnetics based on the Landau-Lifshitz-Gilbert (LLG) equations for the simulation of dynamic hysteretic magnetization-magnetic field (MH) loops at room temperature and at kHz frequencies relevant for magnetic hyperthermia applications offers a challenging area of study for coarse graining. For numerical studies based on micromagnetics, hysteretic heating is typically associated with the specific loss power (SLP) and is assumed to be proportional to the area of a calculated MH loop. In a recent work~\cite{BehCoarse-graining2020} (hereafter referred to as I), we employed and modified a renormalization group (RG) approach introduced by Grinstein and Koch~\cite{grinstein2003coarse} for our model system of magnetite (Fe$_3$O$_4$) nanorods that form the building blocks of nanoparticles used in preclinical magnetic hyperthermia trials on mice \cite{dennis2009nearly}. Our study focused on MH loops and demonstrated that for the case of individual nanorods, where exchange interactions, uniaxial anisotropy, and a sinusoidal external field are included in the model of uniformly magnetized cells, the RG approach works well over an entire range of fixed-volume rods composed of anywhere from 10752 cells ($b=1$) down to a single cell ($b=22$), where the smallest cell size, at scaling parameter $b=1$, corresponds to the dimensions of the magnetite unit cell. Our work also illustrates that significant additional computational speed-up can be achieved over the dynamic range of interest by maintaining a constant value for SR/$\alpha$, where SR is the designated sweep rate (in units of Oe/s) of the MH loop simulation and $\alpha$ is the LLG damping constant. This work, which employed OOMMF micromagnetics software~\cite{OOMMF}, omitted explicit magnetostatic interactions but these were accounted for through an effective uniaxial anisotropy. Here, our previous work is extended with several objectives. The first is to develop a coarse-graining algorithm for dynamic MH loops for a single nanorod that has explicit magnetostatic interactions included (in addition to the scaling of the magnetization, exchange, anisotropy and applied field used previously), which were omitted in the RG analysis of Grinstein and Koch~\cite{grinstein2003coarse}. This study allows for the estimation of an effective single-ion anisotropy that mimics the effects of the self-demagnetization field. The second goal is to examine MH loops corresponding to magnetic nanoparticles (NPs) that are constructed from the nanorods, examining the impact of inter-rod exchange and inter-rod magnetostatic interactions. This part of the study examines the case of just two adjacent nanorods in various geometries, and finishes with composites of 10 stacked rods, inspired by the experimental study of Dennis~et al.~\cite{dennis2009nearly}. Different stackings represent varying degrees of orientational order of nanorods within a NP. Loops corresponding to a variety of applied field orientations are examined. The third goal is to find the effective magnetization and anisotropy that allow the modelling of a NP as a single macrospin, both in the case of a single NP in a field and for two interacting NPs. This macrospin approximation may be useful for further study of NP assemblies. In addition, the impact of cell size on the assigned time step in the OOMMF LLG solver is studied, where a larger time step can be used with larger cell sizes, resulting in an additional increase in computational efficiency.
Magnetic hyperthermia as a novel and developing cancer treatment method continues to attract considerable attention at the applied as well as the fundamental level~\cite{shi2019enhanced, pearce2013magnetic, munoz2017towards, mehdaoui2013increase, dennis2013physics, allia2019nonharmonic}. A wide range of preclinical studies have been reported using magnetic hyperthermia as a primary or secondary cancer treatment along with conventional chemotherapy or radiotherapy~\cite{chang2018biologically, dennis2009nearly, thiesen2008clinical, sadhukha2013inhalable}. Moreover, recent analytical and numerical studies~\cite{usov2012dynamics, simeonidis2016situ, anand2016spin, torche2020thermodynamics, serantes2014multiplying, mehdaoui2013increase, anandhi2020factors} reflect the growing need for understanding the heating mechanisms of magnetic hyperthermia to provide a more accurate guide for experiments. In magnetic hyperthermia, injected magnetic nanoparticles exhibit hysteresis under an applied magnetic field, heating up and damaging cancerous tumor cells. As nanoparticles are mobile inside the tumor upon injection, exploring the effects of interactions between magnetic particles, as well as possible heating mechanisms such as Brownian rotation or hysteresis heating (N\'eel relaxation), is crucial for understanding particle clustering and heating efficiency. To this end, many studies have investigated the impact of long-range dipolar interactions on hyperthermia with interesting and related results~\cite{anand2016spin, landi2014role, haase2012role, cabrera2018dynamical, serantes2014multiplying, mehdaoui2013increase, wu2017magnetic}. For example, Anand~et al.~\cite{anand2016spin} examined the effect of dipole interaction strength on the heating efficiency of micron-sized particles and showed that there is an optimal NP volume fraction for maximizing SLP. Haase and Nowak~\cite{haase2012role} reported a negative effect of dipolar interactions on SLP at high particle concentrations. By contrast, Landi~\cite{landi2014role} used a mean field theory and found that the dipole interactions increase the energy barrier between stable configurations of the magnetization. He deduced that dipolar interactions improve SLP as long as certain conditions on the energy barrier of the system are met. Such studies motivate a bottom-up approach to determining and modeling effective interparticle interactions, and underline the importance of including magnetostatic interactions in our scaling approach. This paper is organized as follows. Our model is described in section II. Section III summarizes the coarse-graining scheme we use and in section IV we test the scaling method for multiple nanorods. Section V contains a more detailed investigation of how inter-rod exchange and magnetostatic interactions affect magnetization dynamics in a system of two nanorods. In section VI, three nanorod composites of varying internal orientational order are introduced and their effective macrospin parameters are determined. In section VII we study the hysteresis loops of two NPs as a function of separation, and test the macrospin models in this context. Finally, we present our conclusions in Section VIII. As choosing the proper time step is another challenging detail in such numerical studies~\cite{lopez2012micromagnetic, kapoor2006effect}, we address it for our system in Appendix A.
\section{\label{sec:model}The model} \begin{figure} \includegraphics[width=\columnwidth ]{Fig1.pdf} \caption{Coarse-graining model of a magnetite nanorod. The smallest micromagnetic cell corresponds to the cubic unit cell of length $a_0=0.839$~nm, with the ferrimagnetic atomic spins represented by a single magnetic moment. Larger cells are characterized by a length $a_b=b\,a_0$ for $b>1$. The number of cells is reduced from $56\times24\times8=10752$ to $N_b=10752/b^3=1344$, 168 and 21 for $b=$2, 4 and 8 respectively. A single block corresponds to $b=22$. Nanoparticles are made of nanorods.}\label{fig:micromagnetics} \end{figure} We wish to simulate iron oxide nanorods made of magnetite or maghemite ($\gamma$-Fe$_2$O$_3$), while including magnetostatic interactions. These two iron oxides have similar magnetic parameters, with the exception of crystalline anisotropy, which is cubic in magnetite and uniaxial for maghemite. Our research is inspired by experimental results reported by Dennis et al.~\cite{dennis2009nearly}, in which the nanorods we simulate are the building blocks of nanoparticles (see Fig.~2 therein). We study here assemblies of from two to ten nanorods, treated as single nanoparticles, to explore their collective heating behaviour by calculating hysteresis loops. For simulating nanorods with nominal dimensions 6.7 nm $\times$ 20 nm $\times$ 47 nm (Fig.~\ref{fig:micromagnetics}), we use the Object Oriented MicroMagnetic Framework (OOMMF)~\cite{OOMMF}, and the smallest simulation cell we use has the dimensions of the unit cell of ferrimagnetic magnetite, represented by a single magnetization vector. We employ the Theta Evolve module~\cite{theta_evolve} required for simulations at finite $T$. The Landau-Lifshitz-Gilbert (LLG) equation is commonly used to model the dynamics of magnetic moments~\cite{cullity2011introduction, gilbert2004phenomenological, brown1963thermal}, describing the precession and damping of a cell's magnetic moment in an effective field. The value of the damping constant $\alpha$, which quantifies energy dissipation, has been reported for magnetite films to range from 0.03 to 0.2 depending on the thickness~\cite{serrano2011thickness}. Setting $\alpha$=0.1 for our system size is consistent with other reported micromagnetic studies~\cite{plumer2010micromagnetic, usov2010low}. The effective field combines Zeeman, exchange, magnetocrystalline anisotropy and magnetostatic terms. Additionally, Brown~\cite{brown1963thermal} provided a formalism to add thermal effects into the calculations via a random effective field. It is known that thermal fluctuations are more pronounced for smaller simulation volumes prone to superparamagnetism, and simulation results strongly depend on cell size~\cite{grinstein2003coarse, lopez2012micromagnetic, lee2004excitations}. We explore the correlation between cell size and time step in Appendix~\ref{app:time_stp} for simulations at finite $T$. As in I, we use the bulk magnetite parameters with a saturation magnetization $M_s=480$~kA/m~\cite{dutz2013magnetic, usov2013properties, heider1988note} and exchange stiffness constant $A_0=0.98 \times 10^{-11}$~J/m~\cite{heider1988note, kouvel1956specific, moskowitz1987theoretical, glasser1963spin, srivastava1979exchange, srivastava1987spin, uhl1995first}, which leads to a critical temperature of $T_c=858$~K for its cubic unit cell of size $a_0=0.839$~nm.
Magnetite (Fe$_3$O$_4$) possesses cubic crystalline anisotropy~\cite{shi2019enhanced, plumer2010micromagnetic, usov2013properties, abe1976magnetocrystalline, vreznivcek2012magnetocrystalline}, and as it has only a weak tendency to produce hysteresis, we omit it in magnetite simulations. However, nanorods may contain significant amounts of maghemite, with uniaxial crystalline anisotropy of energy density $K_0=10$~kJ/m$^3$~\cite{shi2019enhanced, plumer2010micromagnetic, shokrollahi2017review}, which we use in maghemite simulations in the present study. Otherwise, we use the same parameters for maghemite as for magnetite. To restrict the uncontrolled heat generated by eddy currents, the product of amplitude and frequency of the AC magnetic field should be less than a threshold, which limits the sweep rate of the applied AC field to SR=4$H_{\mathrm{max}}f < 0.25$ Oe/ns~\cite{hergt2007magnetic, dutz2013magnetic}, with $f$ the frequency of a sinusoidal field of amplitude $H_{\rm max}$. (It is noteworthy that safe higher thresholds have been reported for particular types of cancerous tissue~\cite{albarqi2019biocompatible, simeonidis2016situ}.) As in I, all of the dynamic hysteresis loops reported in the present study are performed at $T=310$~K, and we use SR=25~Oe/ns and $\alpha$=10. This combination of SR and $\alpha$ is equivalent to the hyperthermia-relevant SR=0.25~Oe/ns and $\alpha$=0.1 for magnetite NPs. This method of increasing $\alpha$ to simulate an effectively slower SR provides significant computational speed-up~\cite{BehCoarse-graining2020}. The nanorod that we simulate has dimensions $8a_0 \times 24a_0 \times 56 a_0$ (with volume $V_{\rm rod}=6350.0$ nm$^3$), with its longest edge along the $z$ axis. The rod is made up of $N_b$ cubic cells with side length $a_b=ba_0$ ($b=1$, 2, 4, 8) while the volume of the rod is fixed for all simulations. A rod is composed of 10752 cells when the smallest cell ($b=1$) is used, and employing larger cells reduces the number of cells dramatically, as $N_b=10752/b^3$, to 1344, 168 and 21 for $b=2$, 4 and 8, respectively. Ultimately, RG scaling enables the description of a rod as a block, corresponding to $b=22\approx\sqrt[3]{8\times24\times56}$, with a single magnetization vector with essentially the same hysteresis loop as obtained with the smallest cell size, even with magnetostatic interactions included. The impact of coarse-graining on loops is then examined for collections of nanorods that form nanoparticles as a foundation for simulating groups of NPs; see Fig.~\ref{fig:micromagnetics}. In calculating hysteresis loops for any cell size, we apply an external magnetic field along the $z$ axis of $H(b) = H_{\rm max} \sin{(2 \pi f t)}$. When uniaxial anisotropy is present, anisotropy directions for different cells within a nanorod are given by small random angles from the long axis of the rod (usually the $z$-axis) drawn from a normal distribution with a standard deviation of 5$^{\circ}$, i.e., anisotropy is along the long axis but with a small dispersion to imitate lattice disorder~\cite{plumer2010micromagnetic, serantes2014multiplying}. $M(b)$ is the $z$ component of the magnetization, which we calculate by averaging over 90 to 100 independent simulations (averaging at each value of the field). We report either $M(b)$ or its normalized form $m_H = M(b)/M_s$. At the beginning of a loop calculation, magnetic moments are randomized and $M(b)$ is approximately zero.
For the first quarter period, $H(b)$ goes from 0 to $H_{\mathrm{max}}$, and we report results for the subsequent period. The error bars for the coercive field $H_c$ are calculated as one standard error above and below its mean value, obtained by considering the standard deviation in $H_c$ over the simulation ensemble used for each loop calculation. \section{\label{sec:RG} Coarse-graining and demagnetization} \begin{figure*}\centering \includegraphics[width=0.45\textwidth]{Fig2a.pdf} \includegraphics[width=0.45\textwidth]{Fig2b.pdf} \includegraphics[width=0.45\textwidth]{Fig2c.pdf} \caption{(a) Rod hysteresis loops when none of the magnetic parameters are scaled. (b) Scaling based on our modified Grinstein-Koch RG method~\cite{grinstein2003coarse,BehCoarse-graining2020} ($\delta\simeq0.511$) as in Eqs.~(1)--(5), but with no scaling of magnetostatic interactions. (c) Scaling of the magnetostatic energy is included with the factor $D_{\mathrm{scl}}=\zeta(b)^{3}$.} \label{fig:RNG} \end{figure*} A few different approaches to scaling magnetic parameters such as $K$ and $A$ with simulation cell size have been proposed in the literature~\cite{grinstein2003coarse, CoarseGrainingFengVisscher, kirschner2005cell, kirschner2006relaxation}. As presented in I, we follow a modified version of the RG approach of Grinstein and Koch~\cite{grinstein2003coarse}, which results in a set of equations for the magnetization, exchange stiffness, applied field, and anisotropy constant, \begin{eqnarray} M_0 &=& \delta \zeta(b) M(b) + (1-\delta) M(b) \label{eq:corrM0}\\ A(b) &=& \zeta(b) \times A_0 \label{eq:RNG_scalingA}\\ H(b) &=& \zeta(b) \times H_0 \label{eq:RNG_scalingH}\\ K(b) &=& \zeta(b)^3 \times K_0 \label{eq:RNG_scalingK} \end{eqnarray} where, \begin{equation}\label{eq:RNG_zeta} \zeta(b)=t/b+1-t, \qquad t=T/T_c, \end{equation} $A_0$, $K_0$, $H_0$ and $M_0$ are the quantities for simulations using cell size $a_0$, $T_c$ is the critical temperature, and the quantities $A(b)$, $K(b)$, $H(b)$ and $M(b)$ are those for a simulation where the cell size is $a_b=ba_0$. The phenomenological parameter $\delta=0.511$ was determined in I from the $T$ dependence of $M$ for our nanorods. In the present work, we propose and test a scaling for magnetostatic interactions, not included in the work of Grinstein-Koch or in I. As a first step in determining a scaling for magnetostatic interactions, we calculate a reference hysteresis loop for $b=1$ for a maghemite nanorod by running simulations using $A_0$ and $K_0$ for the exchange and uniaxial anisotropy parameters, respectively, and including magnetostatic interactions. Results are given by the red curve in all panels of Fig.~\ref{fig:RNG}. We then carry out loop simulations with cell sizes $b a_0$, for $b=2$, 4, and 8. For $b=22$, the dimensions of the single cell are those of the nanorod itself. For these simulations, we use unrenormalized exchange and anisotropy parameters $A(b)=A_0$, $K(b)=K_0$, and again include magnetostatic interactions. The loops resulting from these non-scaled simulations are plotted in Fig.~\ref{fig:RNG}a, showing a very significant increase in loop size as cell size increases. We repeat the loop calculations for $b>1$ using values of $A(b)$ and $K(b)$ from Eqs.~\ref{eq:RNG_scalingA} and \ref{eq:RNG_scalingK}, respectively, and with $M$ and $H$ scaled via Eqs.~\ref{eq:corrM0} and \ref{eq:RNG_scalingH}, so that we plot $m_H=M_0/M_s=(\delta\zeta(b)+1-\delta)M(b)/M_s$ as a function of $H_0=H(b)/\zeta(b)$, and again we include full magnetostatic interactions.
The resulting hysteresis loops are different for different $b$, with coercivity increasing with cell size, as shown in Fig.~\ref{fig:RNG}b. From the above results, it is clear that magnetostatic interactions need to be scaled as cell size changes. Looking at the energy terms in the Hamiltonian (see Appendix~\ref{app:time_stp}), and noting that the exchange energy ($aA\sum \mathbf{m}_i\cdot\mathbf{m}_j$) is proportional to the cell length and $A$ is scaled with $\zeta(b)$, whereas the magnetocrystalline anisotropy energy ($K_u v\sin^2\theta_i$, with $\theta_i$ the angle between $\mathbf{m}_i$ and the easy axis $\mathbf{u}$) is proportional to the cell volume and $K_u$ is scaled with $\zeta^3(b)$, we propose a $\zeta^3(b)$ scaling for the demagnetization energy, which is also proportional to the cell volume. The magnetostatic energy is $\mu_0 v M_s^2\, \mathbf{m}\cdot \mathbf{N}\cdot \mathbf{m}/2$, where the demagnetization tensor $\mathbf{N}$ is determined by the geometry of the system. We repeat the loop calculations for $b >1$, again using RG scaling for $A$, $K$, $M$ and $H$, but now multiply magnetostatic energies and torques by $\zeta(b)^3$. As can be seen in Fig.~\ref{fig:RNG}c, the collapse of the data is reasonably good. The loop areas for $b=1$, 2, 4, 8 and block simulations are 1881, 1706, 1691, 1703 and 1800~Oe, respectively. The smallest loop area (for $b=4$) is 10\% smaller than the area for $b=1$. We note that comparing the above hysteresis loops with those of a system without magnetostatic interactions (Fig.~2c in I) supports a result of Mehdaoui et al.~\cite{mehdaoui2013increase}, namely, that including magnetostatic interactions increases the squareness of the loops.

To accomplish the scaling of magnetostatic interactions when using OOMMF, we take the approach of scaling $M_s$, while ensuring that all other terms in the effective field remain unchanged. The magnetostatic energy is proportional to $M_s^2$; therefore, multiplying $M_s$ by $\zeta(b)^{3/2}$ results in the desired scaling of magnetostatic interactions with $\zeta(b)^3$. At the same time, scaling $M_s$ changes the non-magnetostatic contributions to the effective field entering the LLG calculations, namely the exchange, anisotropy and thermal contributions. We must therefore introduce additional scaling to keep $\mathbf {H}_{\mathrm{eff}}=\mathbf{H}_{\mathrm{exch}}+\mathbf{H}_{\mathrm{anis}}+\mathbf{H}_{\mathrm{ext}}+\mathbf{H}_{\mathrm{thermal}}$ invariant under changes in $M_s$. Thus, when changing the program input $M_s$ to $M_s\zeta(b)^{3/2}$, we must additionally change $A$ to $A\zeta(b)^3$, $K$ to $K\zeta(b)^{3/2}$ and $T$ to $T\zeta(b)^{3/2}$ in order to keep the field strengths $H_{\mathrm{exch}}=2A/(\mu_0 a M_s^2)$, $H_{\mathrm{anis}}=2K/(\mu_0 M_s)$, and $H_{\rm thermal}= \left[2\alpha k_B T/(\gamma \mu_0 M_s V \Delta t)\right]^{1/2}$ unaltered. The end result is that in order to carry out a simulation at $b>1$ and temperature $T_0$, we first calculate $\zeta=\zeta(T_0, b)$, and then set the program inputs to $M_s = M_{s0} \zeta^{3/2}$, $A = A_0 \zeta^6$, $K=K_0 \zeta^{9/2}$, and $T=T_0\zeta^{3/2}$. The external field $H(b)$ is unchanged. This recipe combines the RG scaling of $A$ and $K$ with the appropriate scaling of magnetostatics, and yields $M(b)$. The next step is to model the collective effect of the magnetocrystalline anisotropy, exchange and magnetostatic interactions of a rod with a single magnetization (macrospin) subject to uniaxial anisotropy.
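The full recipe can be summarized in a few lines. The sketch below simply reproduces the stated exponents for the OOMMF program inputs; the function and variable names are illustrative, and no attempt is made to rederive the exponents here.
\begin{verbatim}
# Sketch of the OOMMF input recipe; exponents as stated in the text.

def zeta(b, T=310.0, T_c=858.0):   # Eq. (5); T_c is a placeholder
    t = T / T_c
    return t / b + 1.0 - t

def oommf_inputs(Ms0, A0, K0, T0, b):
    """Program inputs for a simulation at cell size b*a_0 and
    temperature T0, combining the RG scaling of A and K with the
    zeta(b)^3 scaling of magnetostatic energies via Ms."""
    z = zeta(b, T=T0)
    return {
        "Ms": Ms0 * z**1.5,  # scales magnetostatic energy by zeta^3
        "A":  A0 * z**6.0,   # keeps H_exch unaltered
        "K":  K0 * z**4.5,   # keeps H_anis unaltered
        "T":  T0 * z**1.5,   # keeps H_thermal unaltered
    }                        # the applied field H(b) is unchanged
\end{verbatim}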
Modelling the rod as a macrospin is justified by the rather good agreement in the MH loops between the fine-grain simulation ($b=1$) and the single-block case ($b=22$), for which a single magnetization represents the entire rod and no explicit exchange interactions are present. This macrospin description is known as the Stoner-Wohlfarth (SW) model, and the Hamiltonian is
\begin{equation}
\begin{split}
&\mathcal{H}= \mathcal{H}_{\mathrm{anisotropy}} + \mathcal{H}_{\mathrm{Zeeman}},\\
& \mathcal{H}_{\mathrm{anisotropy}}=-K_{\mathrm{eff}}v (\mathbf{m}\cdot\mathbf{u})^2,\\
& \mathcal{H}_{\mathrm{Zeeman}} = -\mu_0M^{\rm eff}_sv (\mathbf{m}\cdot \mathbf{H}),
\end{split}
\end{equation}
where the uniaxial anisotropy has energy density $K_{\mathrm{eff}}$ with its axis along $\mathbf{u}$, and the single magnetization vector has direction $\mathbf{m}$ and magnitude $M^{\rm eff}_s$. $K_{\mathrm{eff}}$ and $M^{\rm eff}_s$ arise from the combined effects of self-demagnetization, magnetocrystalline anisotropy, exchange, and temperature. For the macrospin model of the nanorod, $v=V_{\rm rod}$. $\mu_0$ is the permeability of free space and $\mathbf{H}$ is the externally applied field. This SW macrospin model may be useful for simulating a group of nanorods in solution, for example, with the understanding that interactions between rods include magnetostatic interactions, perhaps in the dipole approximation. This macrospin description differs from the $b=22$ block model in that, first, the self-magnetostatic interaction is accounted for by the effective uniaxial anisotropy, and, second, there is no need for the procedures that implement the RG and magnetostatic scaling. To find the appropriate parameters to model the nanorod as an SW macrospin at 310~K, we calculate the hysteresis loop of the nanorods modelled using $b=4$, averaging over the directions between the field and the long nanorod axis. Given the symmetry of the rod, it is sufficient to integrate directions over a spherical octant, and, following the numerical algorithm presented in Ref.~\cite{bavzant1986efficient}, we employ a seven-point integration scheme, with directions shown in the inset of Fig.~\ref{fig:RodSW}a. We also calculate the directionally averaged loop for an SW particle at 310~K by simulating 1000 particles with random orientations (uniform over a sphere), and then scaling the parameters of the SW particle to match the $H_c$ and remanent magnetization $M_r$ of the rod. For a magnetite rod ($K=0$), we find that $K_{\rm eff}=15.7$~kJ/m$^3$ and $M^{\rm eff}_s = 0.73 M_s = 350$~kA/m. Results are plotted in Fig.~\ref{fig:RodSW}a. It is important to note that if one wishes to plot $m_H$, one should normalize by $M_s$, rather than by $M^{\rm eff}_s$, in order to compare with nanorod loops. For a maghemite rod ($K_0=10$~kJ/m$^3$), we find $K_{\rm eff}=19.4$~kJ/m$^3$ and $M^{\rm eff}_s = 0.73 M_s = 350$~kA/m. From the loops shown in Fig.~\ref{fig:RodSW}a, it is clear that the rod does not precisely follow the SW model. This is because the magnetostatic interactions within the rod only approximately map onto a single anisotropy axis. In Fig.~\ref{fig:RodSW}b, we plot the MH loops for the $b=4$ approximation of the rod and its SW counterpart when the field is along the $z$ axis, i.e., along the anisotropy axis. In this case, we find a smaller value of $K_{\rm eff}=15.0$~kJ/m$^3$ for magnetite, with $M_s^{\rm eff} = 0.8 M_s=384$~kA/m.
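To make the SW macrospin concrete, the sketch below evaluates the Hamiltonian above for a field applied at an angle $\psi$ to the easy axis, using the fitted magnetite-rod parameters quoted in the text as defaults; at $T=0$, with the field along the easy axis, the switching field of this model is the standard anisotropy field $H_K=2K_{\rm eff}/(\mu_0 M^{\rm eff}_s)$.
\begin{verbatim}
import numpy as np

MU0 = 4e-7 * np.pi  # permeability of free space (T m/A)

def sw_energy(theta, H, psi, K_eff=15.7e3, Ms_eff=350e3,
              v=6350e-27):
    """Macrospin energy (J) at angle theta from the easy axis,
    for a field H (A/m) applied at angle psi to that axis.
    Defaults: fitted magnetite-rod values, with v = V_rod."""
    e_anis = -K_eff * v * np.cos(theta)**2       # -K_eff v (m.u)^2
    e_zeeman = -MU0 * Ms_eff * v * H * np.cos(theta - psi)
    return e_anis + e_zeeman

# T=0 switching field for the field along the easy axis:
H_K = 2 * 15.7e3 / (MU0 * 350e3)   # about 7.1e4 A/m
\end{verbatim}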
This fitted value of $K_{\rm eff}=15.0$~kJ/m$^3$ is smaller than the analytical result at $T=0$, $K^{T=0}_{\rm eff}=20.5$~kJ/m$^3$, which we obtain by following Refs.~\cite{cullity2011introduction, Newell1993generalization, fukushima1998volume, Aharoni1998}. For maghemite, we obtain $K_{\rm eff}=18.7$~kJ/m$^3$ with $M_s^{\rm eff} = 0.80 M_s=384$~kA/m. All effective parameters are summarized in Table~\ref{table:Keff}.

\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{Fig3a.pdf}
\includegraphics[width=0.49\textwidth]{Fig3b.pdf}
\caption{Comparison of macrospin models with magnetite and maghemite nanorods for (a) a rotationally averaged external field (SW refers to the macrospin in this case), and (b) an external field along the $z$ axis (MS refers to the macrospin in this case).
}
\label{fig:RodSW}
\end{figure}

\section{Coarse-graining for multiple nanorods\label{multiple_nanorods}}

\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{Fig4a.pdf}
\includegraphics[width=0.49\textwidth]{Fig4b.pdf}
\caption{Scaling applied to a bundle of 8 nanorods with inter-rod exchange $A_{\rm r-r}=0.5 A(b)$. (a) Loops corresponding to simulation cells of length $a_b=b a_0$ for $b=2$, 4, 8 and 22 (block), for a field applied along the $z$ axis. (b) Loops with a rotationally averaged field for nanorods modelled with $b=4$ and 22 (block).
}
\label{fig:8Rod_scaling}
\end{figure*}

As our goal is to simulate magnetic nanoparticles made of nanorods, we test the proposed scaling method for a collection of eight maghemite nanorods in two stacks of four, as shown in the inset of Fig.~\ref{fig:8Rod_scaling}. Simulations include magnetostatics, intra-rod [$A(b)$] and inter-rod ($A_{\rm r-r}$) exchange interactions at half strength [$A_{\rm r-r}=0.5 A(b)$], magnetocrystalline uniaxial anisotropy along each rod's long axis, and a sinusoidal field applied along the $z$ axis. Simulated MH loops for the eight-rod bundle show good agreement for $b=2$, 4 and 8, whereas the loop is significantly different for a bundle of eight blocks ($b=22$), as shown in Fig.~\ref{fig:8Rod_scaling}a. Clearly, modelling the nanorod as a block with a single magnetization does not allow portions of a nanorod to flip independently of the rest of the rod, and hence the shoulder regions of the loop in particular are susceptible to unphysical behaviour. Thus, magnetostatic interactions limit the present prescription for coarse-graining in the case of bundled nanorods. We expand our exploration by comparing the average MH hysteresis loop of this group of nanorods when the applied field is rotationally averaged. Interestingly, averaging over field directions masks the discrepancy between $b=4$ and the block approximation, as shown in Fig.~\ref{fig:8Rod_scaling}b. We conclude that $b=4$ is a reasonable level of coarse-graining for the investigation of multiple-rod configurations in the remainder of the present work.

\section{\label{sec:2rods}Various 2-rod setups}

In this section, we qualitatively explore the effects of magnetostatic and exchange interactions for three different arrangements of two magnetite nanorods, providing some insight into their effects on the magnetization alignment of bundled nanorods. We use RG scaling with $b=4$, and, for this section only, we do not carry out the scaling of magnetostatic interactions, and simply use $M_s$ with no alteration in determining effective fields and energies. We are simply interested in the general effects of the interplay between magnetostatics and inter-rod exchange.
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{Fig5a.pdf}
\includegraphics[width=0.45\textwidth]{Fig5b.pdf}
\includegraphics[width=0.45\textwidth]{Fig5c.pdf}
\caption{(a) Effect of inter-rod magnetostatic interactions. The black loop (stars) is for two perpendicular noninteracting nanorods (with neither exchange nor magnetostatics between rods), and the green loop is for nanorods interacting magnetostatically only. In panel (b), nanorods interact magnetostatically and the inter-rod exchange is $A_{\mathrm{r-r}}=xA(b)$, with $x = 0$ for the blue curve (stars), 0.05 for the red curve (circles) and 0.5 for the green curve (squares). The two parallel nanorods are in contact along their largest faces, and the centre-to-centre distance is 6.7~nm. Panel (c) is as in (b), except that a smaller face is shared and the centre-to-centre distance is 20~nm. In this case, increasing $x$ does not result in a larger loop area.}
\label{fig:2rods}
\end{figure*}

In the first arrangement, we consider only the effect of the magnetostatic interaction between rods. One nanorod is placed along the $z$ axis, and the other along $x$, with the $y$ axis passing through the nanorod centres, as shown in the inset of Fig.~\ref{fig:2rods}a. The external field is along the $z$ axis. Within each rod, magnetostatics and exchange are present. For the black curve in Fig.~\ref{fig:2rods}a, the rods do not interact: they are independent, with $A_{\rm r-r}=0$ and with no magnetostatic interactions between cells belonging to different rods. The loop, in fact, is just the average of two independent rods. The green curve in the same plot shows the loop for the case where the two rods interact magnetostatically: magnetostatic interactions are calculated between all cells in the 2-rod system. The hysteresis loop is smaller for the interacting case. This negative effect of magnetostatics on loop area is in agreement with studies reported by Cabrera et al.~\cite{cabrera2018dynamical} and Serantes et al.~\cite{serantes2014multiplying}, wherein dipole interactions decrease the heating efficiency of magnetic particles when the dipoles are not arranged in end-to-end chains. Panels (b) and (c) of Fig.~\ref{fig:2rods} compare hysteresis loops, when inter-rod magnetostatic interactions are present, for three different inter-rod exchange strengths $A_{\mathrm{r-r}}=x A(b)$, with $x$ = 0, 0.05, and 0.5. Here, the nanorods are side by side with their long axes parallel. Fig.~\ref{fig:2rods}b considers the case of rods with their largest faces making contact (the contact area is 84 $a_4^2$), and Fig.~\ref{fig:2rods}c considers the case where the nanorods make contact through their second-largest faces (a contact area of 28 $a_4^2$). The centres of adjacent parallel nanorods are 6.7~nm and 20~nm apart in panels (b) and (c), respectively. In general, increasing $x$ increases the magnetization alignment between the two nanorods, counteracting the anti-alignment induced by magnetostatics. In Fig.~\ref{fig:2rods}b, for $A_{\rm r-r}=0$, the magnetization of one rod flips before $H$ becomes negative. For $A_{\rm r-r}=0.05 A(b)$ and $0.5A(b)$, the magnetizations of the two rods are locked, and higher exchange strength results in wider hysteresis loops. For the larger centre-to-centre separation (and therefore weaker inter-rod magnetostatic interactions) and smaller contact area presented in Fig.~\ref{fig:2rods}c, for $A_{\rm r-r}=0$, the magnetization of one of the rods flips before the other, but only after $H$ becomes negative.
At $A_{\rm r-r}=0.05 A(b)$, when the magnetization of one rod flips, it takes part of the second rod with it. Only at $A_{\rm r-r}=0.5 A(b)$ do the magnetizations of both rods flip in unison. We note that for $b=4$, the exchange length is $\sqrt{\frac{2 \zeta(4) A_0}{\mu_0 M_s^2}} \approx 7.0$~nm, and is therefore significantly smaller than the centre-to-centre distance. The perhaps counter-intuitive observation is that as $A_{\rm r-r}$ increases, the loop area decreases. We conclude that the pairing of inter-rod exchange and magnetostatics can lead to complex magnetization dynamics within nanorod composites, and therefore to counter-intuitive impacts of inter-rod exchange on heating efficiency. In all 2-rod cases considered, we explicitly place the rods side by side and not end to end. Thus, we do not consider chain formation~\cite{torche2020thermodynamics}, which should enhance hysteresis, but rather the tendency of magnetostatics to cause anti-alignment of neighbouring nanorod magnetic moments. We note that the larger centre-to-centre distance considered in Fig.~\ref{fig:2rods}c means that the anti-aligning effect of magnetostatics is weaker, and so it is perhaps not surprising to see a larger loop area than in Fig.~\ref{fig:2rods}b in the $A_{\rm r-r}=0$ case.

\section{Nanoparticles\label{sec:np}}

\begin{figure*}
\centering
\includegraphics[width=0.47\textwidth]{Fig6a.pdf}
\includegraphics[width=0.52\textwidth]{Fig6b.pdf}
\caption{Three different NPs, $10z$, $8z2y$ and $6z4y$, each assembled from 10 maghemite nanorods. The right panel shows the NP hysteresis loops for a rotationally averaged field (solid curves with symbols), and loops for their equivalent macrospins with the same $M_r$ and $H_c$ (dashed curves). The macrospins equivalent to each NP have $K_{\rm eff}= 5.7$, 8.78 and 10.79 kJ/m$^3$ for the $6z4y$, $8z2y$ and $10z$ NPs, respectively, and $M^{\rm eff}_s = 382$~kA/m.}
\label{fig:NP_SW}
\end{figure*}

Our basic model of nanoparticles composed of nanorods is inspired by the experimental study of Dennis et al.~\cite{dennis2009nearly}. There are, however, no data on how nanorods are packed within a nanoparticle, and two extreme possible assemblies are a totally ordered stack of nanorods and a random cluster of nanorods~\cite{pearce2013magnetic}. Among the various possible arrangements, we choose three assemblies containing 10 maghemite ($K_0=10$ kJ/m$^3$) nanorods: one with all the nanorods along the $z$ axis (which we label $10z$), another with 8 along the $z$ axis and 2 along the $y$ axis ($8z2y$), and a third with 6 nanorods along $z$ and 4 along $y$ ($6z4y$), as shown in Fig.~\ref{fig:NP_SW}a. With these three choices, we mimic some degree of disorder by varying the degree of rod alignment. To compare the heating efficiency of these constructions with the experimental results, we calculate the rotationally averaged hysteresis loop, coarse-graining the rods at the $b=4$ level (including magnetostatic scaling) and assuming $A_{\rm r-r}=0.5 A(b)$. As expected, assemblies with more parallel nanorod arrangements exhibit wider hysteresis loops, as shown in Fig.~\ref{fig:NP_SW}b, which leads to higher heating efficiency. The next step in simplifying the simulation of NPs is to find the magnetic parameters of an SW macrospin that gives the most similar MH hysteresis loops (the same $M_r$ and $H_c$) to nanoparticles of the same volume.
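Matching the macrospin to a rod or NP reduces to matching the $H_c$ and $M_r$ extracted from the simulated loops. A small helper of the following kind (a sketch, assuming a single zero crossing per branch) suffices.
\begin{verbatim}
import numpy as np

def loop_metrics(H, M):
    """Coercive field H_c and remanence M_r from one descending
    branch of a loop, sampled from +H_max to -H_max, by linear
    interpolation of the zero crossings."""
    i = np.where(np.diff(np.sign(M)) != 0)[0][0]   # M crosses zero
    Hc = H[i] - M[i] * (H[i+1] - H[i]) / (M[i+1] - M[i])
    j = np.where(np.diff(np.sign(H)) != 0)[0][0]   # H crosses zero
    Mr = M[j] - H[j] * (M[j+1] - M[j]) / (H[j+1] - H[j])
    return abs(Hc), Mr
\end{verbatim}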
Modelling an NP as a single macrospin describes a complex nanoparticle made of nanorods with one magnetization vector, replacing all the magnetostatic and exchange interactions inside the NP with an effective uniaxial anisotropy of the macrospin. The resulting fits, made by adjusting $K_{\rm eff}$ and $M_s^{\rm eff}$, are shown in Fig.~\ref{fig:NP_SW}b, and the effective uniaxial anisotropies for the three maghemite nanoparticle models $10z$, $8z2y$ and $6z4y$ are 10.79, 8.78 and 5.7~kJ/m$^3$, respectively, with an effective saturation magnetization equal to $0.795 M_s = 382$~kA/m for all three models. Effective parameters for maghemite and magnetite nanoparticles are given in Table~\ref{table:Keff}.

\begin{figure}
\includegraphics[width=0.49\textwidth]{Fig7a.pdf}
\includegraphics[width=0.49\textwidth]{Fig7b.pdf}
\caption{Impact of changing the field direction on loops for NPs composed of rods and for equivalent macrospin (MS) particles. (a) An MS particle with $K_{\mathrm{eff}}=4.8$ kJ/m$^3$ and $M_s^{\rm eff}\simeq382$ kA/m has the same $M_r$ and $H_c$ as a $6z4y$ magnetite NP ($K_0=0$) under a rotationally averaged field (black curve, circles, labelled SW in the legend), whereas it exhibits different MH hysteresis loops when the field is applied along the $x$, $y$ or $z$ axes. For the field applied along the $z$ axis, an MS with $K_{\mathrm{eff}}=3.64$ kJ/m$^3$ yields a hysteresis loop similar to that of the NP. (b) Field along $z$: the effective anisotropy of the $6z4y$ maghemite NP decreases (relative to a rotationally averaged field) from 5.7 to 5.28~kJ/m$^3$; for the $8z2y$ NP it decreases from 8.78 to 6.32~kJ/m$^3$, and for the $10z$ maghemite NP from 10.79 to 7.63~kJ/m$^3$.
}\label{fig:NP_Hz_Keff}
\end{figure}

As with the case of individual rods, it is expected that a single anisotropy axis is not completely sufficient to model the magnetic response. Fig.~\ref{fig:NP_Hz_Keff}a shows the response of the $6z4y$ magnetite nanoparticle model to both rotationally averaged fields and to fields along the $x$, $y$ and $z$ directions, along with the corresponding responses of the SW macrospin model that best matches the rotationally averaged response of the nanoparticle ($K_{\rm eff}=4.8$~kJ/m$^3$). The nanoparticle loops for the $x$ and $y$ directions are non-linear at moderate field magnitudes and have non-zero loop areas, while the macrospin model shows a linear response until saturation and zero loop area. Also shown is the loop for the macrospin model with a reduced effective anisotropy ($K_{\rm eff}=3.64$~kJ/m$^3$) that best matches the nanoparticle's response to a field in the $z$ direction. Fig.~\ref{fig:NP_Hz_Keff}b shows that lower values of $K_{\rm eff}$ are needed to reproduce the response of maghemite nanoparticles to fields along $z$. The equivalent effective anisotropy under $H_z$ decreases to 5.28, 6.32, and 7.63 kJ/m$^3$ for the $6z4y$, $8z2y$ and $10z$ maghemite nanoparticles, respectively. Effective parameters are summarized in Table~\ref{table:Keff}. The difference of up to approximately 35\% in $K_{\rm eff}$ values between the rotationally averaged and $z$ responses can either be regarded as a model error when using the macrospin model for future purposes, or one may preferentially choose one scenario over the other depending on context. For example, in a medium in which the nanoparticles are free to rotate, and can therefore align their anisotropy axes along the field, the lower $K_{\rm eff}$ values obtained from the $z$ response should be used, while for randomly oriented particles unable to rotate, the rotationally averaged values may be more relevant.
\begin{table}
\caption{The effective anisotropy and saturation magnetization of macrospins equivalent to the simulated nanorods and nanoparticles. The bulk saturation magnetization is $M_s=480$ kA/m.}
\centering
\begin{tabular}{|m{1.5 cm} |m{1.5cm} | m{1.4 cm}| m{1.7 cm} |m{1.2 cm}|}
\hline
material \newline & object \newline & $K_{\mathrm{eff}}$ (kJ/m$^3$) & $H$ \newline & $M_s^{\rm eff}$ (kA/m) \\
\hline
Fe$_3$O$_4$ & nanorod & 15.73 & rot. avg. & 350 \\
\hline
Fe$_3$O$_4$ & nanorod & 15.0 & $||z$ & 384 \\
\hline
$\gamma$-Fe$_2$O$_3$ & nanorod & 19.4 & rot. avg. & 350 \\
\hline
$\gamma$-Fe$_2$O$_3$ & nanorod & 18.7 & $||z$ & 384 \\
\hline
Fe$_3$O$_4$ & $6z4y$ NP & 4.80 & rot. avg. & 382 \\
\hline
Fe$_3$O$_4$ & $6z4y$ NP & 3.64 & $||z$ & 382 \\
\hline
$\gamma$-Fe$_2$O$_3$ & $6z4y$ NP & 5.70 & rot. avg. & 382 \\
\hline
$\gamma$-Fe$_2$O$_3$ & $6z4y$ NP & 5.28 & $||z$ & 382 \\
\hline
$\gamma$-Fe$_2$O$_3$ & $8z2y$ NP & 8.78 & rot. avg. & 382 \\
\hline
$\gamma$-Fe$_2$O$_3$ & $8z2y$ NP & 6.32 & $||z$ & 382 \\
\hline
$\gamma$-Fe$_2$O$_3$ & $10z$ NP & 10.79 & rot. avg. & 382 \\
\hline
$\gamma$-Fe$_2$O$_3$ & $10z$ NP & 7.63 & $||z$ & 382 \\
\hline
\end{tabular}
\label{table:Keff}
\end{table}

\section{\label{sec:TwoNPs}Interacting nanoparticles}

\begin{figure*}
\centering
\includegraphics[width=0.51\textwidth]{Fig8a.pdf}
\includegraphics[width=0.48\textwidth]{Fig8b.pdf}
\includegraphics[width=0.48\textwidth]{Fig8c.pdf}
\caption{(a) Hysteresis loops for a system of two magnetite $6z4y$ NPs as a function of the centre-to-centre distance $r$; $d$ is the NP diameter. (b) The quantities $\Delta H_c$ and $\Delta S$ (see main text for definitions) as functions of $r$ approach dipolar scaling near $r/d=1.5$ ($\ln 1.5 \approx 0.405$). Dashed lines are $r^{-3}$ power laws. (c) $H_c$ as a function of $r$ for the 2-NP loops from panel (a), along with $H_c$ obtained from macrospin approximations to the NPs, realized through uniformly magnetized cubes (MS) and dipolar spheres (Dipole). Error bars for the dipole curve are comparable to the symbol size.}
\label{fig:SLP_r2NP}
\end{figure*}

As a prelude to later explorations of the collective heating behaviour of NP chains, as in Ref.~\cite{bakuzis2013chain}, we simulate two magnetite $6z4y$ NPs ($K_0=0$) and study how their hysteresis loop changes as the nanoparticle centre-to-centre distance $r$ varies from one to three NP diameters ($d=47.0$~nm). For these simulations, the centres and anisotropy axes of both NPs lie on the $z$ axis, the external field is also along $z$, and we use $b=4$ for coarse-graining (including magnetostatic scaling). This arrangement mimics chain formation when NPs are free to move and rotate. As shown in Fig.~\ref{fig:SLP_r2NP}a, the hysteresis loop area is larger in the case of two interacting chained NPs than for isolated NPs. This is in agreement with reported results~\cite{cabrera2018dynamical, serantes2014multiplying, mehdaoui2013increase}. We note that the normalization of the loop is such that obtaining the total heat released would require multiplication by the number of particles in the system. As $r$ increases, the effect of magnetostatic interactions between the NPs is reduced and the loop area shrinks~\cite{torche2020thermodynamics}. By $r\approx 3d$, the loop is approximately the same as for noninteracting NPs.
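As a point of reference for the distance dependence quantified next, the field that one NP exerts on the other can be estimated in the point-dipole limit. The sketch below assumes spherical NPs of diameter $d$ with moment $m=vM^{\rm eff}_s$; it is the origin of the $r^{-3}$ scaling examined below.
\begin{verbatim}
import numpy as np

def dipolar_field(r, Ms_eff=382e3, d=47e-9):
    """On-axis field (A/m) at one NP due to the other, treated
    as coaxial point dipoles a distance r (m) apart."""
    v = (4.0 / 3.0) * np.pi * (d / 2)**3  # sphere volume (assumed)
    m = v * Ms_eff                        # dipole moment (A m^2)
    return m / (2.0 * np.pi * r**3)       # falls off as r^-3
\end{verbatim}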
To quantify the $r$ dependence of the loop area and $H_c$, we plot the difference in areas $\Delta S$ between loops for the 2-NP systems and individual NPs ($\Delta S=$ Area(2NPs) $-$ Area(1NP)), as well as the difference in coercivities $\Delta H_c$, as functions of $r$ in Fig.~\ref{fig:SLP_r2NP}b. As may be expected, for $r> 1.5 d$, $\Delta S$ and $\Delta H_c$ decrease with a $1/r^3$ dependence, just as the energy between two dipoles does. This motivates using the dipole approximation to calculate the heating efficiency of NPs when they are farther apart than $1.5 d$. To this end, we carry out two additional sets of simulations. First, we use the effective macrospin parameters for the $6z4y$ magnetite NP ($K_{\mathrm{eff}}=3.64$ kJ/m$^3$, $M_s^{\rm eff}=381.6$ kA/m) and simulate two magnetized cubes with the same volume as the NP, placing their centres and anisotropy axes on the $z$ axis and calculating loops as we vary $r$. For these simulations, we include magnetostatic interactions, both between the cubes and within each cube. Allowing for self-demagnetization is technically inconsistent with our approach, but the self-demagnetization of a cube leads to a cubic anisotropy that has little effect on hysteresis loops. Similarly, we simulate with the Vinamax software~\cite{leliaert2015vinamax} two spheres with the same effective parameters as the cubes and dipole moment $v M_s^{\rm eff}$, thus neglecting self-demagnetization (consistent with the effective parameters) and treating the interaction between spheres in the dipolar approximation. We report $H_c$ for the two macrospin models and the $6z4y$ NPs in Fig.~\ref{fig:SLP_r2NP}c, with cubes labelled {\it MS} and spheres labelled {\it Dipole}. The agreement between all three sets of data is satisfactory for $r\ge 1.5 d$.

\section{Conclusions}\label{sec:sonclusion}

The present work represents the first comprehensive study of coarse-graining for use in micromagnetic simulations. We extend an RG-based coarse-graining scheme, previously developed and explored in I, to include magnetostatic interactions in micromagnetic simulations, and apply it to dynamic hysteresis loops at $T=310$~K of magnetite (no magnetocrystalline uniaxial anisotropy) and maghemite nanorods, as well as to collections of stacked nanorods that model NPs of varying internal orientational order. For individual nanorods, the coarse-graining procedure reproduces loops even up to the representation of the nanorod as a block with a single magnetization. For collections of rods, the interplay between inter-rod exchange and magnetostatic interactions can lead to complex magnetization dynamics, and we limit our level of coarse-graining to $b=4$ (cell length four times larger than the unit cell of magnetite) when simulating 10-nanorod model NPs. For both individual nanorods and NPs, we find the effective uniaxial anisotropy and saturation magnetization parameters for SW macrospin models that yield equivalent loops. For nanorods, the effective anisotropy is approximately 15--16~kJ/m$^3$ for magnetite, and approximately 19~kJ/m$^3$ for maghemite. The effective saturation magnetization is 73\% to 80\% of the bulk value, depending on whether the orientation with respect to the external field is assumed to be rotationally averaged or parallel. For our 47~nm-diameter NPs, the effective anisotropy ranges from 4~kJ/m$^3$ for our most orientationally disordered ($6z4y$) magnetite NP to 11~kJ/m$^3$ for our most ordered ($10z$) maghemite NP.
The effective saturation magnetization is approximately 80\% of the bulk value. For this modelling, we assume an inter-rod exchange strength of half the bulk value. For simulations of two NPs, we find that the loop area, or rather the difference in loop areas between interacting and noninteracting NPs, scales with distance in a dipole-like manner for centre-to-centre distances at and beyond 1.5 times the particle diameter. For this distance and beyond, we find good agreement between the two-NP results and those for two macrospin equivalents interacting via dipolar interactions. Thus, our methodology starts with micromagnetic parameters at the unit-cell level and, through coarse-graining, allows us to carry out micromagnetic simulations of nano-sized particles with properly scaled parameters and magnetostatic interactions. Further, we find equivalent macrospin models with effective anisotropies and saturation magnetizations that can be used in micromagnetic or molecular dynamics simulations involving a large number of NPs. This knowledge allows for the simulation of larger systems with more detail than is normally assumed within macrospin models, and should be extendable to non-hyperthermia applications. We also find (Appendix A) that using a larger cell size allows the use of a larger step size in integrating the equations of motion. Over the range of cell sizes studied, we find that, approximately, if the cell volume is increased by some factor, the step size may be increased by the same factor.

\section*{Acknowledgment}

We thank Michael J. Donahue for discussions and his expert guidance on how to achieve the scaling of magnetostatics with OOMMF. We also thank Mikko Karttunen for useful discussions, and both him and Styliani Consta for hosting our stay (RB and ISV) at Western University. We acknowledge financial support from the Natural Sciences and Engineering Research Council (Canada). Computational resources were provided by ACENET and Compute Canada.
\section{Introduction}

The raw information map, in which a robot saves a representation of its sensed information, is essential to the robot's tasks, such as planning, navigation, and obstacle avoidance. However, because it consists of low-level information that is difficult for humans to comprehend, it is not a suitable representation for building a shared understanding of the world with the user. A semantic map, on the other hand, is a richer representation of the environment that can be understood by both the robot and the user. It is better suited to be a common frame of reference and a user-facing representation.

\begin{figure*}[!htb]
\centering
\includegraphics[width=180mm]{images/semantic_map_update.png}
\caption{\textbf{Semantic map update system.} The semantic map, consisting of semantics S annotated on the raw map M, is created using algorithms A or user input U. Semantic map transfer attempts to transfer all semantics from the previous map N to map N+1 at the end of robot mission N+1, with the requirement that the semantics remain valid. Conflicts in the semantics are resolved and/or new semantics are discovered in the Resolution and Discovery module. The map meta layer is introduced during conflict resolution to keep semantics valid without altering the robot's raw map. Finally, the Update module determines whether to accept or reject the new information from mission N+1.}
\label{transfer_block}
\end{figure*}

We employ semantic maps in our fleet of floor-cleaning robots, where users rely on them extensively to interact with their robots. Our robots use a visual SLAM (vSLAM \cite{c22_Eade10}) localization system to generate and maintain a lifelong map \cite{c24_Banerjee19} of the environment. Our semantic map, which consists of the walls, doors, and rooms of the home, is generated through algorithms \cite{c25_Kleiner17} and user annotations. The semantic map is central to the user's interactions with the robot. It enables intuitive experiences such as cleaning a specific room through an app or a voice command. Deploying semantic maps on a large fleet of robots exposed several common challenges in robot mapping that needed to be addressed in the context of semantic mapping. Noise in the sensing, dynamic objects in the environment, and the robot uncovering new space that it had not previously explored are three such problems that occur regularly in practical robots. We present solutions for these specific problems through a framework that is general enough to handle other similar problems. In real-world situations, with noisy sensing and dynamic environments, the low-level information may be sensed differently between robot runs, whereas the high-level semantics of the environment are not expected to change significantly. As a user-facing representation, it is desirable that the semantic map be stable across slight perturbations of the underlying raw map, but flexible enough to account for significant and semantically meaningful changes in the environment. The semantics, once established, are the common frame of reference for the robot and the user. For instance, the user can intuitively ask the robot to clean a particular room, and the robot knows exactly what to do. It is essential that once semantics have been exchanged between the user and the robot, they remain valid for the lifetime of the robot. Semantic maps have been discussed extensively in the literature \cite{c1_Kostavelis15, c7_Landsiedel17, c8_Nuchter08, c9_Ruiz17, c13_Galindo05}.
They can result in a better understanding of indoor maps \cite{c2_Rusu09, c3_Pangercic12, c4_Zender08} and outdoor spaces \cite{c5_Lang14, c6_Wolf08}, and can be useful in tasks such as planning \cite{c10_Galindo08}. While several works address the challenges of semantic maps in a single robot run, relatively few have focused on multiple sequential runs \cite{c11_Mason12, c12_Galindo07}, which is a prerequisite for lifelong mapping. When a lifelong mapping robot performs a new run (called a \emph{mission}), it is normal for it to update its raw map using recent information from its sensors. Every raw map update must be accompanied by an update of the semantic map. The advantage of updating the raw and semantic maps every mission is that the robot maintains the most recent belief about its environment. For lifelong mapping, localization in dynamic environments \cite{c19_Tipaldi13}, efficient management and update of maps \cite{c15_Brki18, c17_Pomerleau14}, and map summarization \cite{c16_Dymczyk16, c18_Muhlfellner16} have been studied previously. The challenges in ensuring valid semantics for lifelong robots have not been addressed. Semantic constraints \cite{c14_Limketai05, c13_Galindo05}, particularly dynamic ones \cite{c20_Asada89}, are an important aspect of maps. Semantics are often constrained by how they relate to each other and to the raw map. An example constraint in our system is that doors must be attached to a wall. An unsuccessful update of semantics implies that there are inconsistencies (or \emph{conflicts}) in the constraints. For instance, a door which previously separated two rooms may become inconsistent by appearing between two different rooms after an unsuccessful update. When semantics are updated incorrectly, the assumption that the robot and user share a common understanding of the environment is violated. We use algorithms to detect conflicts and resolve them. For conflict resolution, we propose the use of an additional map layer, called the map-meta layer, with additional meta-semantics. The use of multi-layered maps is not new in the literature. For instance, Zender et al. \cite{c21_Zender07} use multiple layers, with each layer representing a different level of abstraction. We use the additional map layer for explicitly handling the inevitable inconsistencies over sequential map updates. If the semantic map update is not successful despite the meta-layer based resolution, the map update may be discarded and the map reverted to the valid state from the previous mission. Apart from maintaining previously established semantics, when the robot senses significant changes in the environment, there is potential for discovering new semantics. A discovery step enables the computation of these semantics and their annotation on the map. Figure \ref{transfer_block} shows a block diagram of our system, which consistently updates the semantic map across multiple missions while allowing the robot to learn and incorporate environment changes into the semantic map. We motivate and illustrate the semantic map system through our use-case of floor-cleaning robots. Our semantics are derived on simple 2D floorplan maps, but we believe the building blocks and principles described in this paper are useful for other complex maps that involve semantics and constraints. The internal algorithms for the building blocks are explained at a high level, with less emphasis on the details, because the exact algorithms will vary greatly depending on the robot's type and its purpose. The paper is organized as follows.
Section \ref{Overview} describes our robot's localization and mapping system, the semantics and their constraints, and the various modules of the semantic map update system. The problem of jitter in the robot's sensed map is addressed by our algorithm to transfer semantics from mission to mission in Section \ref{Transfer}. Dynamic objects can lead to different types of semantic conflicts. Conflicts, their automatic detection, and their resolution using meta-semantics are explained in Section \ref{Conflicts}. Section \ref{Discovery} handles the discovery of new space, and finally Section \ref{Update} recaps the update step of the semantic map system. Quantitative results in Section \ref{Results} show our system's efficacy on hundreds of real home maps.

\section{Semantic map system overview}\label{Overview}
\subsection{SLAM to Semantic map}\label{SemanticMap}

Our robots use a visual SLAM \cite{c22_Eade10} (vSLAM) algorithm that tracks visual landmarks and integrates various sensor modalities to generate a map of the house. Once generated, the vSLAM map is updated for the lifetime of the robot, across hundreds of missions, using efficient techniques for managing the complexity while retaining its efficacy \cite{c24_Banerjee19}. The vSLAM map is crucial for the robot to localize itself accurately at different locations and times in the home. Another important map in the system is the \emph{occupancy map}, which represents a top-down view of the home where white represents \emph{free} areas, black represents \emph{occupied} areas where walls or objects were sensed (called \emph{obstacles}), and grey represents \emph{unexplored} regions. This map is saved internally through a set of sub-grids that are inter-connected through pose constraints and are allowed to move relative to each other~\cite{c23_Llofriu17}. The pose constraints come from the vSLAM algorithm, and hence there is a tight coupling between the occupancy map and the vSLAM localization system. Figure \ref{occ} shows an example of our two-dimensional occupancy map. Because the obstacles are detected by a bump sensor of limited resolution, and due to uncertainty in the robot's estimate of its own pose, the occupancy map can be noisy and difficult for a user to comprehend. Extraction of walls, dynamic objects (called \emph{clutter}), rooms, and dividers from the occupancy map yields the semantic map of Figure \ref{map_1}, which is much easier to understand. Shown as the \textbf{Semantic map creation} module in Figure \ref{transfer_block}, the semantics are both estimated automatically \cite{c25_Kleiner17} and annotated directly by the user through an app interface.

\begin{figure}[!htb] \centering \includegraphics[width=70mm]{images/db_orig.png} \caption{Raw occupancy map in the robot} \label{occ} \end{figure}
\begin{figure}[!htb] \centering \includegraphics[width=70mm]{images/db_orig_map_labeled.png} \caption{Semantic map derived from the raw map of Fig \ref{occ}} \label{map_1} \end{figure}
\begin{figure}[!htb] \centering \includegraphics[width=70mm]{images/db_new.png} \caption{New raw map after a subsequent mission. Compared to the earlier map in Fig \ref{occ}, previously sensed walls appear disconnected (highlighted in red) and an opening has closed (blue)} \label{occ_current_marked} \end{figure}
\begin{figure}[!htb] \centering \includegraphics[width=70mm]{images/db_new_result.png} \caption{Updated semantic map.
Semantics from Fig \ref{map_1} successfully transferred to the raw map in Fig \ref{occ_current_marked}} \label{map_1_transfer} \end{figure}
\begin{figure}[!htb] \centering \includegraphics[width=70mm]{images/db_new_no_recovery.png} \caption{Semantic map transfer failure. Failed attempt to transfer semantics from Fig \ref{map_1} to the raw map in Fig \ref{occ_current_marked}. The changes highlighted in Fig \ref{occ_current_marked} cause rooms $5$, $11$, and $12$ to not be recovered correctly.} \label{map_1_transfer_fail} \end{figure}

\subsection{Multiple missions and the Semantic map}\label{MultiMissionSemanticMap}

After the semantic map has been created, in subsequent missions the robot may sense its world differently because of changes in the world or uncertainty in the robot's representation of it. Further, the vSLAM system constantly updates the robot's position and improves its belief about the world as it observes visual information. Hence, even pixels representing static objects may move slightly in the occupancy map to reflect this belief. For instance, Figure \ref{occ_current_marked} shows the occupancy map of the same space as Figure \ref{occ}, but at the end of a different mission. The red and blue ovals show the key changes sensed by the robot. The red ovals show previously connected obstacles now disconnected, either due to moved objects or sensing errors. The blue oval shows a previously open path now closed. Here, the robot sensed obstacles where the previous doorway was and decided that it could not enter the room. Although the underlying raw map varies slightly with each mission, the user is generally not interested in these perturbations, because semantic concepts tend to be stable in a home. We would hence like to show the user a stable semantic map by transferring the semantics correctly from the previous map to the new one. Figure \ref{transfer_block} shows that as a robot performs a new mission, it updates its internal map ($\mathbf{M^\prime}$). The requirement for the \textbf{Semantic map transfer} block is that the semantics are updated ($\mathbf{S^\prime}$) while remaining valid. A successful transfer of semantics is shown in Figure \ref{map_1_transfer}. All the rooms and dividers are reproduced correctly in the transferred map. The unsuccessful transfer attempt shown in Figure \ref{map_1_transfer_fail} is undesirable, because some rooms have been lost and others have changed their shape significantly.
\begin{table*}[h]
\caption{Semantics for indoor robots}
\label{Semantics_table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Semantic} & \textbf{Physical meaning} & \textbf{Constraint} & \textbf{Motivation for constraint}\\
\hline \hline
Occupancy pixels & Occupied regions in map & Must be sensed by robot & Used as cost map in planning; constraint\\
 & & & helps use robot's best estimates for occupied pixels\\
\hline
Clutter & Temporary/dynamic objects & Lies on occupancy pixels & Yields accurate visualization on semantic map\\
\hline
Wall & Shape of environment & Within a distance of occupied pixels & Yields accurate representation of rooms\\
\hline
Divider & Boundary between rooms & Ends on wall or another divider & Enclosed rooms are well-defined \\
 & & Annotated by user & User is final authority on segmentation of the space\\
\hline
Room & Spaces in the environment & Defined by wall sections and dividers & Room polygon tightly corresponds to physical space,\\
 & & & hence robot can cover space satisfactorily\\
\hline
\end{tabular}
\end{center}
\end{table*}

\subsection{Semantic constraints}\label{Constraints}

The semantic map for our floor-cleaning robot is designed with several constraints. While some constraints reflect practical limitations of the robot's application, others ensure consistency in the semantic map's visualization. Table \ref{Semantics_table} lists the semantics and their constraints in our semantic map. To emphasize the significance of constraints, consider the constraint enforced on the \emph{room} semantic. A room is an enclosed space in the home, defined by \emph{walls} and \emph{dividers}. Walls are constrained to lie within a threshold distance of a sensed occupancy pixel, and dividers must end on a wall or on another divider. This means that the room polygon thus described tightly fits the physical walls of the home as sensed by the robot.

\subsection{Conflict resolution and discovery}\label{ConflictResolutionDiscovery}

Each semantic map update must be guaranteed to maintain all the defined constraints. If semantics become invalid because of a constraint violation, this is automatically determined as a \textbf{Conflict} using various criteria in Figure \ref{transfer_block}. Conflict resolution is done either through algorithms or by requesting user input. To aid conflict resolution, we add an additional layer, called \textbf{Map Meta}, to the semantic map. The role of the meta-semantics layer is to add or remove relevant pieces of information that make the semantics valid while keeping the raw map unaltered. When valid resolution is not possible, the map update is rejected and the semantic map reverts to its earlier state. When user input is requested for conflict resolution, the user can either provide additional information to fix the map or reject the update. At the end of each mission, the robot also determines whether the map has changed significantly and whether there is a possibility of annotating new semantics. This \textbf{Discovery} phase in Figure \ref{transfer_block} enables new semantics to be added through algorithms or user input. Next, general solutions for the three common problems of jitter, dynamic objects, and new space are described.

\section{Transfer of map semantics}\label{Transfer}

The first key problem in dynamic environments is to keep the semantic map stable even though the underlying raw map may jitter. The problem can be viewed as one of \emph{transferring} the semantics from the previous raw map to the current raw map.
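Throughout this section, it is convenient to view the semantics of Table \ref{Semantics_table} as simple geometric records with validity checks. The Python sketch below is purely illustrative; the class and field names are not our robot's actual schema.
\begin{verbatim}
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class Wall:
    polyline: List[Point]       # must hug sensed occupied pixels

@dataclass
class Divider:
    ends: Tuple[Point, Point]   # each end on a wall or divider
    user_annotated: bool = True

@dataclass
class Room:
    boundary: List[Point]       # wall sections plus dividers

def divider_is_valid(div, walls, dividers, tol=0.05):
    """Both end-points must terminate on a wall or on another
    divider, within tolerance tol (checked against vertices
    only, for brevity)."""
    def near(p, q):
        return (p[0]-q[0])**2 + (p[1]-q[1])**2 <= tol**2
    anchors = [p for w in walls for p in w.polyline]
    anchors += [p for d in dividers if d is not div for p in d.ends]
    return all(any(near(e, a) for a in anchors) for e in div.ends)
\end{verbatim}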
Note that our robots already have a very strong vSLAM framework for tracking visual landmarks in the home across missions. Further, our semantics have a spatial component by virtue of being represented on a 2D map. A natural solution is to leverage the vSLAM framework, with modifications, to track the spatial components of the desired semantics. At the beginning of a robot's mission, we integrate the vertices of the rooms and the dividers into the vSLAM system as explicit spatial points to track. As the robot moves and updates its location and map, the locations of the rooms and dividers are tracked by the vSLAM system. The room vertices and dividers move according to the motion estimated by the underlying vSLAM system. At the end of the mission, vSLAM returns motion estimates for the rooms and dividers. It is not sufficient to use these estimates as the final result, since the current shape of the rooms or walls may have changed due to the removal or addition of clutter objects near walls, the opening and closing of doors, or the moving of large pieces of furniture. Further, the semantic constraints we defined in Table \ref{Semantics_table} require walls to be within a threshold distance of sensed occupied pixels. Similarly, room boundaries are constrained to consist of wall sections and dividers. In order to guarantee these constraints, we first estimate the new walls from the occupancy map, associate the previously tracked divider end-points to the new walls, and then reconstruct the room shapes from the new walls and dividers. Figure \ref{basic_transfer} shows the tracked boundaries and dividers visualized underneath the final boundaries and dividers in an example map. It shows the inaccuracy in the tracked room shapes and dividers: they do not line up exactly with the walls in the new map. Divider end-point association and room reconstruction are hence essential for maintaining the specified constraints.

\begin{figure}[!htb] \centering \includegraphics[width=70mm]{images/result_anchor_into_result_json.png} \caption{Misalignment between the tracked room boundaries and the new boundary in a map. The new boundary, estimated from the new raw occupancy map, is shown in black and is rendered on top of the tracked room boundaries, rendered in red. Jitter in the map is evident in the red segments.} \label{basic_transfer} \end{figure}

It is also important to keep the wall and clutter semantics consistent across maps. We achieve this by using the tracked room shapes again. Obstacles in the new occupancy map that are within a threshold distance of the tracked room boundaries are labelled as wall obstacles. Obstacles that lie in the interior of previously tracked room boundaries are labelled as clutter obstacles. Wall estimation is done on the new map after this wall and clutter classification of obstacles. The result is that walls and clutter are consistent between the previous and new maps.
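These classification steps appear as steps 2 and 3 of Algorithm \ref{transfer_alg} below. On a grid representation, they reduce to a distance-transform test against the tracked boundaries; the following sketch is a simplification, with illustrative names, boolean masks, a threshold in cells, and the assumption that the tracked boundaries are closed.
\begin{verbatim}
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_fill_holes

def classify_obstacles(obstacles, tracked_boundary, thresh=3):
    """Split an obstacle mask into wall and clutter obstacles."""
    # Distance (in cells) from each pixel to the tracked boundary.
    dist = distance_transform_edt(~tracked_boundary)
    wall = obstacles & (dist <= thresh)
    # Obstacles strictly inside the tracked rooms are clutter.
    interior = binary_fill_holes(tracked_boundary) & ~tracked_boundary
    clutter = obstacles & interior & ~wall
    return wall, clutter
\end{verbatim}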
\begin{algorithm}
\caption{Semantic map transfer algorithm}
\label{transfer_alg}
\begin{algorithmic}
\STATE \textbf{1} Track room vertices and divider end-points by integrating them into the SLAM system
\bindent
\STATE (A new occupancy map and tracked room boundaries are the result at the end of the robot's new mission)
\eindent
\STATE \textbf{2} Wall information transfer:
\bindent
\STATE Classify obstacles that are within a certain distance
\STATE of the tracked room boundaries as wall obstacles
\eindent
\STATE \textbf{3} Clutter information transfer:
\bindent
\STATE Classify obstacles that are in the interior
\STATE of tracked rooms as clutter obstacles
\eindent
\STATE \textbf{4} Estimate new walls from the new occupancy map, using the classification from steps 2 and 3
\STATE \textbf{5} Obtain new divider end-points by moving the tracked end-points to the nearest wall
\STATE \textbf{6} Reconstruct new room shapes from the new walls and divider end-points
\end{algorithmic}
\end{algorithm}

\section{Semantic map conflicts}\label{Conflicts}

The transfer algorithm (Algorithm \ref{transfer_alg}) can result in a semantic map with conflicts. We use the precision and recall of rooms to determine when the transfer is successful. Figure \ref{PR} illustrates the precision and recall for a single room. We compute the precision and recall for each individual room in the map. Semantics transfer for a given mission is deemed successful if all the rooms in the previous map have been transferred to the new map with precision and recall higher than a $50\%$ threshold. Several other criteria may be used, such as failure to transfer all dividers, a large unexpected change in room shape, or a change in connectivity between rooms. A successful semantics transfer is shown in Figure \ref{success}, where all rooms from the previous map on the left have a high precision and recall in the transferred new map. Figures \ref{fail1} and \ref{fail2} show examples of failures.

\begin{figure}[!htb] \centering \includegraphics[width=25mm]{images/PR-image.png} \caption{Precision and recall for a single room. The yellow room from the previous map is compared to the red room in the current map. The overlap region is shown in orange. Recall is $\frac{\mathrm{area}(C)}{\mathrm{area}(A) + \mathrm{area}(C)}$ and precision is $\frac{\mathrm{area}(C)}{\mathrm{area}(B) + \mathrm{area}(C)}$} \label{PR} \end{figure}
\begin{figure}[!htb] \centering \includegraphics[width=85mm]{images/success.png} \caption{Successful transfer with high precision and recall for all rooms} \label{success} \end{figure}
\begin{figure}[!htb] \centering \includegraphics[width=85mm]{images/fail1.png} \caption{Failure case 1: low recall for room 9 (zero recall), low precision for room 12} \label{fail1} \end{figure}
\begin{figure}[!htb] \centering \includegraphics[width=85mm]{images/fail2.png} \caption{Failure case 2: low recall for room 3} \label{fail2} \end{figure}

\subsection{Conflict resolution and the need for meta-semantics}\label{Resolution}

The second key problem is that of dynamic objects in the home. As objects move around between missions, they can cause significant changes in the occupancy map. The constraints defined for walls and room boundaries mean that when the sensed occupancy pixels differ greatly between missions, it may not be possible to satisfy all constraints. One solution is to alter the robot's raw map so that the semantics may be recovered. However, this is not desirable, because the raw map has implications for low-level robot behavior and planning.
We propose introducing an additional layer of meta-semantics for conflict resolution. Represented as Map Meta in Figure \ref{transfer_block}, it enables the recovery of semantics while keeping the raw map unaltered. The following are some examples of common sources of conflict and their recovery.

\subsubsection{Disconnected walls}\label{disconnected_walls}

The previous map's walls may be broken up into disjoint sections in the new map. When this happens, despite correct placement of the previous dividers, the new rooms cannot be reconstructed correctly from the walls and dividers. To solve this problem, we take the difference between the tracked room boundaries from Algorithm \ref{transfer_alg} and the newly estimated walls. The difference is processed to obtain sections that were wall in the previous map and are not wall in the new map. Adding these connected components back into our wall estimation algorithm makes the wall connections reappear in the new map. The proposed corrections are biased towards relying on information from the previous map. Figure \ref{disconnected} shows the previous map and the result of semantic transfer. The walls that became disconnected (highlighted in red) prevent the room shapes from being reconstructed correctly. The underlying cause of the disconnections is apparent in the corresponding raw occupancy maps shown in Figure \ref{disconnected_occ}.

\begin{figure}[!htb] \centering \includegraphics[width=70mm]{images/disconnected.png} \caption{Disconnected wall. Room $1$ from the previous map cannot be recovered in the new map} \label{disconnected} \end{figure}
\begin{figure}[!htb] \centering \includegraphics[width=70mm]{images/disconnected_occ.png} \caption{Disconnected occupancy map. Underlying cause of the disconnected wall in Fig \ref{disconnected}} \label{disconnected_occ} \end{figure}

By adding back the wall difference image, shown on the left in Figure \ref{wall_difference}, at the appropriate sections, we recover the connected occupancy map shown in the center. Only difference sections that adjoin failed rooms are added back. Performing wall estimation and semantics transfer on this corrected occupancy map produces the correct result on the right.

\begin{figure}[!htb] \centering \includegraphics[width=85mm]{images/adding_wall.png} \caption{Disconnected wall correction. Adding the \emph{wall difference image} (left) to obtain the modified meta-occupancy map (middle) corrects the disconnected wall in the resulting semantic map (right)} \label{wall_difference} \end{figure}

\subsubsection{Newly connected walls}\label{new_connected_walls}

This is the converse of the disconnected-walls problem: two wall sections that were disconnected in the previous map appear to be connected in the new map. The solution is also the converse: we take the difference image between the free space within the tracked room boundaries and the free space within the new walls. The connected components from this difference image represent sections that are free in the previous map but wall in the new map. Adding these free sections back to the occupancy map restores the original openings between walls. The corrected occupancy map and difference sections are not saved directly in the robot's occupancy pixels, but as meta-occupancy pixels in the Map Meta layer. Note that adding back all the wall and free difference sections estimated between the previous and new maps would result in a corrected occupancy map that is identical to the previous occupancy map.
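On a grid, the wall difference image and its per-section processing can be sketched as follows; the names and the size threshold are illustrative, and overlap after a one-cell dilation stands in for adjacency to a failed room.
\begin{verbatim}
import numpy as np
from scipy.ndimage import label, binary_dilation

def wall_difference_sections(prev_wall, new_wall):
    """Connected components of (previous wall) minus (new wall)."""
    diff = prev_wall & ~new_wall
    labels, n = label(diff)
    return [labels == k for k in range(1, n + 1)]

def add_back(meta_occ, sections, failed_rooms, max_size=200):
    """Add small sections adjoining failed rooms to the meta
    layer; the raw occupancy map itself is never altered."""
    for s in sections:
        adjoins = np.any(binary_dilation(s) & failed_rooms)
        if adjoins and s.sum() <= max_size:
            meta_occ |= s
    return meta_occ
\end{verbatim}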
Adding everything back in this way would mean that the robot disregards any newly sensed information from the environment, which is not desirable. We avoid this by only adding difference sections that adjoin rooms that failed to transfer, and only if the difference sections are smaller than a pre-determined threshold. Thus, relatively small corrections are made to the map. Large changes sensed by the robot are always kept as is.

\subsubsection{Additional dividers}\label{separators}

The third key problem mentioned earlier was that of new space being explored by the robot. Sometimes it becomes necessary to add additional dividers to keep the previous and new maps' room boundaries consistent. For example, imagine that a new passage connects previously disconnected rooms in a home. An example of this is shown in Figure \ref{map_grow_conflict}. In a new mission, the robot explored and found a connecting space between previous rooms. Since this region was unexplored in the previous map, the difference information of Sections \ref{disconnected_walls} and \ref{new_connected_walls} does not help. The new connection between the previous rooms causes ambiguity for the robot. This conflict can be detected automatically, and it can be solved either through user input or by automatically placing additional meta-dividers. Meta-dividers are placed by determining which rooms were lost and adding back all the tracked boundary segments of the lost rooms. The meta-divider based room recovery shown at the bottom of Figure \ref{map_grow_conflict} shows successful recovery of all the previous room shapes and the inference of a new room.

\begin{figure}[!htb] \centering \includegraphics[width=70mm]{images/map_grow_conflict.png} \caption{Adding additional separators. New space explored by the robot in (b) connects rooms $1$, $3$, and $6$, resulting in their not being recovered by the semantic map transfer algorithm. Adding additional separators that are not present in (a) allows for recovery of all the original rooms and the creation of the new room $11$ in (c)} \label{map_grow_conflict} \end{figure}

\begin{table*}[h]
\caption{Meta-semantics with relaxed constraints}
\label{meta_semantics_table}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Meta-semantic} & \textbf{Relaxed constraint} & \textbf{Corresponding semantic (and constraint)}\\
\hline \hline
Meta-occupancy pixels & Need not be sensed by robot & Occ. pixels (must be sensed by robot)\\
\hline
Meta-divider & Ends on wall or meta-divider & Divider (ends on wall or divider, \\
 & Not annotated by user & must be annotated by user) \\
\hline
\end{tabular}
\end{center}
\end{table*}

\subsection{Map Meta layer}\label{MapMeta}

Meta-semantics added to the map meta layer help re-establish the original set of semantics. They hence have a form that is similar to the original semantics. However, since they need to explain an underlying conflict, the constraints on them have to be relaxed. For example, we added meta-occupancy and meta-divider semantics to the Map Meta layer. The meta-occupancy pixels are a version of the occupancy pixels for which the requirement that occupied regions represent real sensed obstacles is relaxed. A meta-divider is like a divider, but unlike a divider, which must have been annotated beforehand at specified regions of the map, it can be drawn anywhere as required. The relaxation of the constraints on the Map Meta layer semantics gives the system the flexibility to resolve conflicts and guarantee the constraints of the original semantics.
Table \ref{meta_semantics_table} summarizes our meta-semantics and their relaxed constraints. \section{Discovery}\label{Discovery} The third key problem in life-long mapping is the exploration of previously unseen space by the robot. We saw an example of this problem yielding conflicts in Section \ref{separators}. In general, for a robot that constantly updates its map across missions, a mechanism to discover new semantics is essential. The robot must have the ability to process the change in its map and determine if there are potentially new semantics to be discovered. When pre-determined conditions are triggered, our system launches a discovery phase to gather new semantics from the raw map. The discovery of new rooms in the home is one such example. When any previous room appears to grow larger than a given threshold in the new map, we trigger the discovery of new rooms by running an automatic divider estimation algorithm in the grown region. Figure \ref{new_space} shows an example. The map at the end of a new mission shows the previous rooms correctly transferred and a new room added in the newly explored section of the home. Once the new semantics are discovered, they become available for the robot's use and are subsequently maintained by the semantic map system. \begin{figure}[!htb] \centering \includegraphics[width=70mm]{images/map_growth_example.png} \caption{New space exploration example. Semantics discovery module detects newly explored space and infers an additional room $4$ in the new map} \label{new_space} \end{figure} \section{Update}\label{Update} Updating the semantic map with new information allows the robot to use the information in future missions. The consequences of updating the robot's semantic map with an invalid one are unexpected robot behavior and poor user experience. To avoid these, the validity of the semantics and their mutual constraints are verified automatically at the end of the semantics transfer, conflict resolution, and discovery phases. Only when deemed valid is the new semantic map set as the robot's map for future missions. If found invalid, the robot reverts to the previous map at the cost of losing information from the new mission. \section{Results}\label{Results} The successful use of our semantic maps by thousands of real users is the most significant result of this work. We also benchmark our system on a large internal data set from real homes with cleaning missions performed under no explicit user instructions. The data set hence represents realistic usage of a consumer robot. For quantitative evaluation in this paper, we present results on a small randomly chosen set of $25$ robots and $425$ missions from our internal data set. We use the precision-recall based criterion described earlier to determine when a semantic map is successfully updated. Table \ref{results_table} shows the error rate - the fraction of all missions where the semantic map update failed. Running the semantic map transfer algorithm (Algorithm \ref{transfer_alg}) as a baseline results in an error rate of $20.92\%$. Including the meta-occupancy semantic and re-estimating walls and clutter based on the meta-occupancy grid reduces the error to $11.06\%$. Including the meta-divider semantic along with the meta-occupancy reduces the error rate to $1.41\%$. We thus have a highly accurate system for the consistent update of semantics across multiple missions.
\begin{table}[h] \caption{Results} \label{results_table} \begin{center} \begin{tabular}{|c|c|} \hline \textbf{Condition} & \textbf{Error rate}\\ \hline \hline Semantic map transfer (baseline) & $20.92\%$\\ \hline Transfer + Meta-occupancy & $11.06\%$ \\ \hline Transfer + Meta-occ + Meta-dividers & $1.41\%$ \\ \hline \end{tabular} \end{center} \end{table} \section{Conclusions}\label{Conclusions} We have described principles for enabling a lifelong mapping robot that learns and maintains a semantic map of its world. Through deployment on a fleet of real robots, we uncovered key challenges in semantic maps and their maintenance over time in dynamic environments. Spatial transfer of semantics, automatic conflict detection, use of meta-semantics for conflict resolution, and discovery of new semantics form the building blocks of an intuitive user-facing semantic map system that remains stable despite the perturbations in the raw map. Our semantic map update system addresses the specific key challenges and serves as a framework for addressing future ones. \addtolength{\textheight}{-12cm}
\section{Introduction} Gaussian processes (GPs) \cite{Rasmussen} are flexible, interpretable, and powerful non-parametric statistical methods which provide accurate predictions with a low amount of uncertainty. They apply Bayes' theorem for inference, which allows them to estimate complex linear and non-linear structures without restrictive assumptions on the model. They have been used extensively in practical applications, e.g. optimization \cite{Shahriari}, data visualization and manifold learning \cite{Lawrence}, reinforcement learning \cite{Deisenroth2013}, multitask learning \cite{Alvarez}, online streaming models \cite{Huber, Le}, and time series analysis \cite{Petelin, Tobar}. The main bottleneck of using standard GPs is that they scale poorly with the size of the dataset. For a dataset of size \textit{N}, the training complexity is $\mathcal{O}(N^3)$ because the inversion and determinant of the $N\times N$ kernel matrix are needed. Prediction over a test set, as well as storing the results, incurs an additional complexity of $\mathcal{O}(N \log N)$. This issue currently restricts GPs to relatively small training datasets, the size of which is typically in the order of $\mathcal{O}(10^4)$. To deal with large datasets, two different strategies are used. The first strategy is based on sampling a small subset of the full dataset. The methods that follow this strategy train a GP on the smaller subset and then generalize the results. A simple method of this kind is subset-of-data (SoD) \cite{Chalupka}, which only uses a subset of size \textit{m} from the original dataset; its training complexity is $\mathcal{O}(m^3)$, where $m \ll N$. Since this approach ignores the remaining data, it has limited performance. Another method, called the sparse kernel or compactly supported kernel \cite{Gneiting,Melkumyan}, ignores observations that are not correlated or whose covariance is smaller than a threshold. In radial-based kernels, if the distance between two different entries is larger than a determined value, their covariance is set to zero. Although the training complexity of this method is $\mathcal{O}(\alpha N^3)$, it does not, for certain interesting cases, guarantee that the modified kernel is positive semi-definite. The most popular method in this area is the sparse approximation approach, which employs a subset of the data (called inducing points) and the \textit{Nyström} approximation to estimate the prior and posterior distributions \cite{Titsias, Hensman}. For \textit{m} inducing points, the training complexity is $\mathcal{O}(Nm^2)$. Although the authors provide a full probabilistic model using the Bayesian framework, it is not feasible to apply the method to large and high-dimensional datasets because its capability is restricted by the number of inducing points \cite{Bui, Moore}. The second strategy is to divide the full dataset into partitions, train local GPs in each partition \cite{Snelson, Urtasun}, and then aggregate the local approximations \cite{Cao, Deisenroth2015, Liu}. Unlike sparse approximations, this local approach can model quickly varying systems and non-stationary data features. Since, in this family, the training procedure is run on different subsets, the final prediction may be affected by regions with poor predictive performance or by discontinuous predictions in overlapping sub-regions. The most popular local approximation methods are the mixture of experts (MoE) and the product of experts (PoE).
The MoE works as a Gaussian mixture model (GMM). It combines the local experts with their hyper-parameters and improves the overall predictive power \cite{Tresp2001, Masoudnia}. The main drawback of this method is that joint training is needed to learn the mixing probabilities and the individual experts. This joint training positively affects the predictive power and helps control the experts with poor performance, but - on the negative side - it increases the complexity \cite{Cao}. The prominent product of experts (PoE) \cite{Hinton} and Bayesian committee machine (BCM) \cite{Tresp} provide a new framework for GPs. The independent experts are GPs that are learned separately. Both methods suffer from the discontinuity issue and the weak experts' problem \cite{Camps-Valls, Liu}. The generalized product of experts (GPoE) \cite{Cao} and robust Bayesian committee machine (RBCM) \cite{Deisenroth2015} propose different aggregation criteria, which are robust to weak experts' predictions. To cope with the consistency problem in the predictions, \cite{Rulli'ere} suggested the nested pointwise aggregation of experts (NPAE), which provides consistent predictions but increases the time complexity. The authors of \cite{Liu} proposed the generalized robust Bayesian committee machine (GRBCM), which introduces one expert as a base expert, i.e., a global expert, and modifies the RBCM accordingly. They showed that this modified RBCM is capable of providing consistent predictions, especially in the disjoint data partitioning regime. The idea behind the BCM and PoE families of methods is the conditional independence (CI) assumption between the local experts. These divide-and-conquer approaches can speed up the computation and provide a distributed learning framework. However, since the CI assumption is violated in practice, they return poor results in cases with dependent experts. The key contribution of our work lies in considering the dependency between Gaussian experts and improving the prediction quality in an efficient way. To this end, we first develop an approach to detect the conditional correlation between Gaussian experts, and then we modify the aggregation using this knowledge. In the first step, a continuous form of a Markov random field is used to infer dependencies, and the expert set is then divided into clusters of dependent experts. In the second step, we adopt GRBCM for this new scenario and present a new aggregation method that is accurate and efficient and leads to better predictive performance than other SOTA approaches, which use the CI assumption. The structure of the paper is as follows. Section II introduces the GP regression problem and SOTA DGP approaches. In Section III, the proposed model and the inference process are presented. Section IV shows the experimental results, and we conclude in Section V. \section{Problem Set-up} \subsection{Background} Let us consider the regression problem $y=f(x)+\epsilon$, where $x\in \mathbb{R}^D$ and $\epsilon \sim \mathcal{N}(0,\sigma^2)$, and the Gaussian likelihood is $p(y|f)=\mathcal{N}(f, \sigma^2 I)$. The objective is to learn the latent function \textit{f} from a training set $\mathcal{D}=\{(x_i,y_i)\}_{i=1}^n$. A Gaussian process is a collection of function variables, any finite subset of which has a joint Gaussian distribution.
The GP then describes a prior distribution over the latent functions, $f \sim GP\left(m(x),k(x,x^{'}) \right)$, where $m(x)$ is a mean function and $k(x,x^{'})$ is the covariance function (kernel) with hyperparameters $\psi$. The prior mean is often assumed to be zero, and the kernel is the well-known squared exponential (SE) covariance function equipped with automatic relevance determination (ARD), \[k(x,x^{'})=\sigma_f^2 \; \exp\left( -\frac{1}{2} \sum_{i=1}^D \frac{(x_i-x_i^{'})^2}{\mathcal{L}_i} \right),\] where $\sigma_f^2$ is a signal variance, $\mathcal{L}_i$ is an input length-scale along the \textit{i}th dimension, and $\psi=\{\sigma_f^2,\mathcal{L}_1,\ldots,\mathcal{L}_D\}$. To train the GP, the hyper-parameters $\theta=\{\sigma^2, \psi\}$ should be determined such that they maximize the log-marginal likelihood \cite{Rasmussen} \begin{equation} \label{eq:gp_likelihood} \log\, p(y|X)=-\frac{1}{2}y^T\mathcal{C}^{-1}y - \frac{1}{2}\log|\mathcal{C}|- \frac{n}{2}\log(2\pi), \end{equation} where $\mathcal{C}=K+\sigma^2I$. For a test set $x^*$ of size $n_t$, the predictive distribution is also Gaussian, $p(y^*|D,x^*)\sim \mathcal{N}(\mu^*,\Sigma^*)$, with mean and covariance respectively given by \begin{equation} \label{eq:gp_mean} \mu^*=k_*^T(K+\sigma^2I)^{-1}y, \end{equation} \begin{equation} \label{eq:gp_var} \Sigma^*=k_{**} - k_*^T(K+\sigma^2I)^{-1}k_*, \end{equation} where $K=k(X,X)$, $k_*=k(X,x^*)$, and $k_{**}=k(x^*,x^*)$. According to \eqref{eq:gp_likelihood}, the training step scales as $\mathcal{O}(n^3)$ because it requires the inversion and determinant of $\mathcal{C}$, which is an $n \times n$ matrix. Therefore, for large datasets, training is a time-consuming task and imposes a limitation on the scalability of GPs. \subsection{Distributed Gaussian Process} To scale GPs to large datasets, the cost of the standard GP is reduced by distributing the training process. This involves dividing the full training dataset $\mathcal{D}$ into $M$ partitions $\mathcal{D}_1,\ldots,\mathcal{D}_M$ (called experts) and training a standard GP on each of these partitions. The predictive distribution of the $i$'th expert $\mathcal{M}_i$ is $p_i(y^*|\mathcal{D}_i,x^*)\sim \mathcal{N}(\mu_i^*,\Sigma_i^*)$, where its mean and variance are calculated using \eqref{eq:gp_mean} and \eqref{eq:gp_var}, respectively: \begin{equation} \label{eq:experts_mean} \mu_i^*=k_{i*}^T(K_i+\sigma^2I)^{-1}y_i, \end{equation} \begin{equation} \label{eq:experts_var} \Sigma_i^*=k_{**} - k_{i*}^T(K_i+\sigma^2I)^{-1}k_{i*}. \end{equation} Aggregating these experts is based on the assumption that they are independent. The most prominent aggregation methods are PoE \cite{Hinton} and BCM \cite{Tresp}. GPoE \cite{Cao} and RBCM \cite{Deisenroth2015} are modified versions of PoE and BCM, which address the discontinuity problem and overconfident predictions. The term distributed Gaussian process (DGP) was proposed by \cite{Deisenroth2015} to include PoE, BCM, and their derivatives, which are all based on the fact that the computations of the standard GP are distributed amongst individual computing units. Unlike sparse GPs, DGPs make use of the full dataset but divide it into individual partitions. The predictive distribution of a DGP is given as the product of multiple densities (i.e., the experts).
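Each factor of this product is a standard GP predictive distribution computed on a single partition, cf. \eqref{eq:experts_mean} and \eqref{eq:experts_var}. For concreteness, a minimal \texttt{numpy} sketch of one expert's prediction is shown below; it is illustrative only (it uses a Cholesky factorization for numerical stability, while the experiments in this paper use the GPML toolbox instead).
\begin{verbatim}
import numpy as np

def expert_predict(X_i, y_i, X_star, kernel, sigma2):
    # Predictive mean and variance of one expert via a Cholesky
    # factorization of K_i + sigma^2 I.
    K = kernel(X_i, X_i)
    Ks = kernel(X_i, X_star)
    L = np.linalg.cholesky(K + sigma2 * np.eye(len(X_i)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_i))
    V = np.linalg.solve(L, Ks)
    mean = Ks.T @ alpha
    var = kernel(X_star, X_star).diagonal() - np.sum(V * V, axis=0)
    return mean, var
\end{verbatim}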
If the experts $\{\mathcal{M}_i\}_{i=1}^M$ are independent, the predictive distribution of the DGP for a test input $x^*$ is \begin{equation} \label{eq:gpoe} p(y^*|\mathcal{D},x^*) \propto \prod_{i=1}^M p_i^{\beta_i}(y^*|\mathcal{D}_i,x^*). \end{equation} The weights $\beta = \{\beta_1,\ldots,\beta_M\}$ describe the importance and influence of the experts. The typical choice of the weights is the difference in differential entropy between the prior $p(y^*|x^*)$ and the posterior $p_i(y^*|\mathcal{D}_i,x^*)$ of each expert, i.e. $\beta_i=\frac{1}{2}(\log \Sigma^{**} - \log \Sigma_i^{*})$ \cite{Cao}. With such weights, however, the predictions of GPoE are too conservative \cite{Liu}. To address this issue, the simple uniform weights $\beta_i=\frac{1}{M}$ are used \cite{Deisenroth2015}. The predictive distribution of GPoE with normalized weights asymptotically converges to the full Gaussian process distribution but is too conservative \cite{Szabo}. The Bayesian committee machine (BCM) \cite{Tresp} instead uses the Gaussian process prior $p(y^*)$ when it aggregates the predictions of the experts, imposing the conditional independence assumption $\mathcal{D}_i \perp \!\!\! \perp \mathcal{D}_j |y^*$ for two experts $i$ and $j$. Inspired by GPoE, \cite{Deisenroth2015} proposed the robust Bayesian committee machine (RBCM), which adds the importance weights $\beta_i$ to improve the prediction quality of BCM, especially in regions with only few data points. The distributed predictive distribution of this family of models is \begin{equation} \label{eq:bcm} p(y^*|\mathcal{D},x^*)= \frac{\prod_{i=1}^M p_i^{\beta_i}(y^*|\mathcal{D}_i,x^*)}{p^{\sum_{i=1}^M \beta_i-1}(y^*)}, \end{equation} where $\beta_i=1$ recovers the basic BCM model. Note that in RBCM, as well as in GPoE with varying $\beta_i$, the predictions cannot recover the full GP prediction when $M=1$.
In this case, we should have $\beta_i=1$, while usually $\beta_i=\frac{1}{2}(\log \Sigma^{**} - \log \Sigma_{full}^{*}) \neq 1$. \subsection{Discussion of the Properties of Existing Aggregations} \paragraph{Consistency} To deal with the inconsistency issue, the nested pointwise aggregation of experts (NPAE) \cite{Rulli'ere} considers the means of the local predictive distributions as random variables, by assuming that $y_i$ has not been observed, and therefore allows for dependency between the individual experts' predictions. Theoretically, it provides consistent predictions, but its aggregation step has a much higher time complexity: for $M$ individual partitions, NPAE needs to calculate the inverse of an $M \times M$ matrix at each test point, which leads to a long running time when a large training set is used or the number of partitions is large. Another model is the generalized robust Bayesian committee machine (GRBCM)~\cite{Liu}, which introduces a base (global) expert and considers the covariance between the base and the other local experts. For a global expert $M_b$ in a base partition ${D}_{b}$, the predictive distribution of GRBCM is \begin{equation} \label{eq:grbcm} p(y^*|\mathcal{D},x^*)= \frac{\prod_{i=2}^M p_{bi}^{\beta_i}(y^*|\mathcal{D}_{bi},x^*)}{p_b^{\sum_{i=2}^M \beta_i-1}(y^*|\mathcal{D}_b,x^*)} , \end{equation} where $p_b(y^*|\mathcal{D}_b,x^*)$ is the predictive distribution of $M_b$, and $p_{bi}(y^*|\mathcal{D}_{bi},x^*)$ is the predictive distribution of an expert trained on the dataset $\mathcal{D}_{bi}=\{\mathcal{D}_{b},\mathcal{D}_{i}\}$. It improves the prediction quality and consistency of RBCM and has time complexity $\mathcal{O}(\alpha nm^2_0) + \mathcal{O}(\beta n^{'}n m_0)$, where $m_0$ is the number of points assigned to each expert, $n^{'}$ is the size of the test set, $\alpha=(8M-7)/M$, and $\beta=(4M-3)/M$ \cite{Liu}.\\ \paragraph{Conditional independence (CI)} CI is a crucial assumption for many unsupervised ensemble learning methods. It has been used widely in regression and classification problems \cite{Moreira,Parisi}. The (R)BCM and (G)PoE methods are also based on the CI assumption, which reduces the computational cost of the training process; see Figure \ref{fig.1.a}. However, in practice, this assumption is often violated and their predictions are not accurate enough. In fact, ensembles based on CI return sub-optimal solutions \cite{Jaffe}. In this regard, only a few works have considered modeling the dependencies between individual predictors. For classification, \cite{Donmez} used pairwise interactions between classifiers, and \cite{Platanios} considered the agreement rates between subsets of experts. In another work~\cite{Jaffe}, the authors suggested a model using clusters of binary classifiers in which the classifiers in each cluster are conditionally dependent, and defined a specific score function based on the covariance between classifiers to detect dependency. In local approximation GPs, the only method that has considered the dependency between experts is NPAE~\cite{Rulli'ere, Bachoc}. It assumes that the joint distribution of the experts and $y^*$ is Gaussian and uses the properties of conditional Gaussian distributions to define the meta-learner. Due to its high computational cost, which depends cubically on the number of experts at each test point, this method does not provide an efficient solution for large real-world datasets.
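For later reference, both \eqref{eq:gpoe} and \eqref{eq:grbcm} admit closed-form combinations of the experts' means and variances, since all factors are Gaussian. The following \texttt{numpy} sketch (an illustration under our notation, not reference code) computes both rules for per-test-point scalar predictions, with \texttt{mu} and \texttt{var} of shape $(M, n_t)$; for GRBCM, row $0$ holds the base expert $p_b$ and the remaining rows hold the experts $p_{bi}$.
\begin{verbatim}
import numpy as np

def gpoe(mu, var, beta=None):
    # Generalized product of experts with uniform weights
    # beta_i = 1/M by default.
    M = mu.shape[0]
    beta = np.full((M, 1), 1.0 / M) if beta is None else beta
    prec = np.sum(beta / var, axis=0)
    return np.sum(beta * mu / var, axis=0) / prec, 1.0 / prec

def grbcm(mu, var, beta):
    # Row 0 is the base expert p_b; its correction enters with the
    # exponent (sum_i beta_i - 1), as in the GRBCM predictive
    # distribution.
    excess = np.sum(beta[1:], axis=0) - 1.0
    prec = np.sum(beta[1:] / var[1:], axis=0) - excess / var[0]
    mean = (np.sum(beta[1:] * mu[1:] / var[1:], axis=0)
            - excess * mu[0] / var[0]) / prec
    return mean, 1.0 / prec
\end{verbatim}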
In the next section, we propose a new model that uses the dependency between experts and defines a modified aggregation method based on GRBCM. \section{Distributed Gaussian Process with Dependent Experts} Assume the Gaussian experts $\mathcal{M}=\{\mathcal{M}_1,\ldots,\mathcal{M}_M\}$ have been trained on different partitions, and let $\mu_{\mathcal{M}}^*=[\mu_1^*,\ldots,\mu_M^*]$ be an $n_t \times M$ matrix that contains the local predictions of the $M$ experts at the $n_t$ test points. Our approach makes use of the experts' predictions $\mu_{\mathcal{M}}^*$ in order to detect strong dependencies between experts. This step results in clusters of correlated experts, $\mathcal{C}=\{\mathcal{C}_1,\ldots,\mathcal{C}_P\}$, $P \ll M$. Aggregating the experts in each cluster then leads to a new layer of experts, $\mathcal{K}=\{\mathcal{K}_1,\ldots,\mathcal{K}_P\}$, which are conditionally independent given $y^*$. Figure \ref{fig.1.b} depicts this model, where the experts in cluster $\mathcal{C}_i$ are conditionally independent given $\mathcal{K}_i$, and each $\mathcal{K}_i$ is independent of $\mathcal{K}_j$, $i\neq j$, given $y^*$. The final prediction is obtained using $\mathcal{K}$ instead of $\mathcal{M}$.\vspace{2mm} \begin{figure}[hbt!] \centering \subfloat[]{\includegraphics[width=0.8\columnwidth]{./plots/dgp.jpg}% \label{fig.1.a}} \hfil \subfloat[]{\includegraphics[width=0.8\columnwidth]{./plots/cdgp.jpg}% \label{fig.1.b}} \caption{ \textbf{Computational graphs}: (a) DGP model with CI \cite{Deisenroth2015}; (b) DGP with clusters of dependent experts \cite{Jaffe}} \label{fig.1} \end{figure} \begin{definition}[\textbf{Assignment function}] \label{def.1} A function $\mathcal{H}: \mathcal{M} \to \mathcal{C}$ is the assignment function that represents the related cluster for each expert. $\mathcal{H}(\mathcal{M}_i)=\mathcal{C}_j$ means that the $i$'th expert belongs to the $j$'th cluster of experts and has a dependency with the experts in this cluster. Therefore, if $\mathcal{H}(\mathcal{M}_i)=\mathcal{H}(\mathcal{M}_j)$, the $i$'th and $j$'th experts are correlated with each other and belong to the same cluster. \end{definition} In the following, we show how to detect subsets of strongly dependent experts and present a new aggregation method for DGPs. \subsection{Dependency Detection with Gaussian Graphical Models} The key idea in an undirected graphical model (or pairwise Markov random field (MRF)) is to model the set of local estimators as a connected network, such that each node represents a Gaussian expert and the edges represent the interactions between them. This network model uses a matrix of parameters to encode the graph structure. In other words, it considers the edges as parameters, such that if there is a connection between two nodes, then there is a non-zero parameter for the pair. \paragraph{Gaussian graphical models (GGMs).} Gaussian graphical models (GGMs) \cite{Rue,Uhler,Drton} are continuous forms of pairwise MRFs, i.e. the node variables are continuous. The basic assumption for GGMs is that the variables in the network follow a multivariate Gaussian distribution, \begin{equation} \label{eq:ggm_variance} p(Z|\mu, \Sigma)= \frac{1}{(2\pi)^{Q/2}|\Sigma|^{1/2}} \exp\left\{-\frac{1}{2}(Z-\mu)^T\Sigma^{-1}(Z-\mu) \right\}, \end{equation} where $Z=\{Z_1,\ldots,Z_Q\}$ are the variables (nodes), $Q$ is the number of variables, and $\mu$ and $\Sigma$ are the mean and covariance, respectively.
The distribution in \eqref{eq:ggm_variance} can also be expressed using the precision matrix $\Omega$: \begin{equation} \label{eq:ggm_precision} \begin{split} p(Z|h, \Omega)=& \frac{|\Omega|^{1/2}}{(2\pi)^{Q/2}} \exp\left\{-\frac{1}{2}(Z-\mu)^T\Omega(Z-\mu) \right\} \\ \propto & \exp\left\{-\frac{1}{2}Z^T\Omega Z + h^T Z \right\} , \end{split} \end{equation} where $\Omega=\Sigma^{-1}$ and $h=\Omega \mu$. The matrix $\Omega$ is also known as the potential or information matrix. Without loss of generality, let $\mu= 0$; then the distribution of a GGM exhibits the potential defined on each node $i$ as $\exp\{-\Omega_{ii}Z_i^2\}$ and on each edge $(i,j)$ as $\exp\{-\Omega_{ij} Z_i Z_j\}$. Unlike correlation networks, Eq.~\eqref{eq:ggm_variance}, which encode the edge information of the network in the covariance matrix, a GGM is based on the precision matrix, Eq.~\eqref{eq:ggm_precision}. In a correlation network, if $\Sigma_{ij}=0$, then $Z_i$ and $Z_j$ are assumed to be independent, while in a GGM, if $\Omega_{ij}=0$, then $Z_i$ and $Z_j$ are conditionally independent given all other variables, i.e. there is no edge between $Z_i$ and $Z_j$ in the graph.\\ \paragraph{Network Learning.} In the network, the locally trained experts are the nodes, and network learning results in the precision matrix $\Omega$; the latter reveals the conditional dependencies between experts. GGMs use the common sparsity assumption, that is, there are only a few edges in the network and thus the parameter matrix is sparse. This assumption usually makes sense in experts' networks because the interaction of one expert is limited to only a few other experts. To this end, Lasso regression \cite{Hastie2015} is used to perform neighborhood selection for the network. The Meinshausen-Bühlmann algorithm \cite{Meinshausen2006} is one of the first algorithms in this area; \cite{Meinshausen2006} and \cite{Wainwright} proved that, under some assumptions, the Lasso asymptotically recovers the correct relevant subsets of edges. \cite{Friedman2008} proposed the efficient graphical Lasso, which adopts a maximum likelihood approach subject to an $L_1$ penalty on the coefficients of the precision matrix. The graphical Lasso has been improved in later works \cite{Friedman2010,Friedman2011,Hallac}. Let $S$ be the sample covariance. Then the Gaussian log-likelihood of the precision matrix $\Omega$ is equal to $\log |\Omega| - \operatorname{trace}(S \Omega)$. The graphical Lasso (gLasso) maximizes this likelihood subject to an element-wise $L_1$ norm penalty on $\Omega$. Precisely, the objective function is \begin{equation} \label{eq:glasso} \hat{\Omega}= \arg\max_{\Omega} \log |\Omega| - \operatorname{trace}(S \Omega) - \lambda \left\Vert \Omega \right\Vert_1, \end{equation} where the estimated neighborhoods are then given by the non-zero elements of $\hat{\Omega}$. Since $\hat{\Omega}$ contains all the information about the dependencies between experts, we use it to construct the assignment function $\mathcal{H}$, the clusters $\mathcal{C}$, and thereby the new experts $\mathcal{K}$. \subsection{Aggregation} After determining the dependencies between the experts, we apply the following aggregation method. First, we define clusters of interdependent experts, i.e. clusters that include experts with strong dependency. Then, using the GRBCM method in the $i$'th cluster, we generate for each cluster a modified expert $\mathcal{K}_i$. The final prediction is obtained by aggregating the predictions of these modified experts. \paragraph{Experts clustering} After detecting the dependencies between experts, we use the precision matrix to find the assignment function.
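For illustration, the estimation of $\hat{\Omega}$ in \eqref{eq:glasso} can be carried out with off-the-shelf solvers. The sketch below uses scikit-learn's \texttt{GraphicalLasso} on the matrix of experts' predictions (rows are test points, columns are experts); it is a hedged example, as our experiments use a MATLAB implementation.
\begin{verbatim}
from sklearn.covariance import GraphicalLasso

def estimate_precision(mu_M, lam=0.1):
    # mu_M: (n_t, M) matrix of the M experts' predictive means.
    # GraphicalLasso internally forms the sample covariance S and
    # solves the L1-penalized maximum-likelihood problem.
    return GraphicalLasso(alpha=lam).fit(mu_M).precision_
\end{verbatim}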
Performing a clustering approach on the precision matrix returns the clusters of experts $\mathcal{C}$; thus, each cluster $\mathcal{C}_i$ contains experts that are strongly dependent according to the precision matrix. To this end, we apply \emph{spectral clustering (SC)} \cite{Luxburg}, which is robust and works well in practice. Spectral clustering makes use of the relevant eigenvectors of the Laplacian matrix of the similarity matrix (here, the precision matrix) alongside a standard clustering method. The Laplacian matrix is $L=D-\Omega$, where $D$ is a diagonal matrix that contains the sum of the values in each row of $\Omega$. \begin{figure}[hbt!] \centering \subfloat[ $\lambda =0.1$]{\includegraphics[width=0.8\columnwidth]{./plots/net1.pdf}% \label{fig.2.a}} \hfil \subfloat[$\lambda =0.1, P=6$]{\includegraphics[width=0.8\columnwidth]{./plots/net22.pdf}% \label{fig.2.b}} \hfil \subfloat[$\lambda =0.1$]{\includegraphics[width=\columnwidth]{./plots/net3.pdf}% \label{fig.2.c}} \caption{\textbf{Gaussian graphical models}: (a) shows the interactions between experts in a GGM of 20 experts with a penalty term $\lambda =0.1$, (b) shows the GGM with 6 clusters of experts, and (c) depicts the \emph{heat map} plot of the experts' precision matrix.} \label{fig.2} \end{figure} Figure \ref{fig.2} depicts the GGM of a simulated dataset with $10^5$ training points, which have been divided into 20 partitions (experts). This dataset is considered in detail in Section \ref{f_x}. Figure \ref{fig.2.a} represents the sparse graph obtained with a penalty term $\lambda =0.1$ in the graphical Lasso, with the nodes (experts) and edges (interactions or dependencies). Even with this penalty term, the CI assumption is violated because all experts are connected to each other. Figure \ref{fig.2.b} displays the graph after performing spectral clustering on the precision matrix. The 6 clusters in the graph contain correlated experts, and clusters that are now represented by only one expert (e.g. the cluster with red color) contain original experts that are not strongly dependent on the other experts. Figure \ref{fig.2.c} represents the \emph{heat map} plot of the symmetric precision matrix and shows the conditional dependencies between experts. The main diagonal contains the experts' internal potentials, while the off-diagonal elements are the conditional dependencies between experts. This figure shows that the experts are conditionally dependent and the CI assumption is violated. \paragraph{Final Aggregation} We assume that the new experts $\{\mathcal{K}_1,\ldots,\mathcal{K}_P\}$ are conditionally independent given $y^*$ (see Figure \ref{fig.1.b}), which is not a strong assumption due to the process by which they were generated. The task is to find the distribution of the new experts $\{\mathcal{K}_1,\ldots,\mathcal{K}_P\}$ and then find $p(y^*|\mathcal{D},x^*)$. The authors in \cite{Liu} showed that GRBCM provides consistent predictions under some mild assumptions, i.e. it can recover the true posterior distribution of $y^*$ when $n \to \infty$. Hence, we use the GRBCM aggregation method in each cluster, adding the global communication expert $M_b$ to all clusters. For aggregating the new experts, we use either GPoE or GRBCM. Since the number of experts aggregated in each step is smaller than $M$, the computational cost of this scenario is smaller than that of GRBCM applied to all $M$ experts. Algorithm \ref{alg:1} depicts the aggregation process.
\begin{algorithm}[H] \caption{Aggregating Dependent Local Gaussian Experts} \label{alg:1} \begin{algorithmic}[1] \REQUIRE {$\mu^*_M$, $\lambda$, $P$} \STATE Calculate the sample covariance $S$ of the experts' predictions \STATE Estimate $\hat{\Omega}$ using \eqref{eq:glasso} \STATE Estimate $\mathcal{H}$ by performing spectral clustering \emph{SC}($\hat{\Omega}$, $P$) \STATE Obtain the new experts $\{\mathcal{K}_1,\ldots,\mathcal{K}_P\}$ using GRBCM \eqref{eq:grbcm} \STATE Aggregate the new experts using GPoE \eqref{eq:gpoe} or GRBCM \eqref{eq:grbcm} \RETURN The estimated mean and variance of $p(y^*|\mathcal{D},x^*,{\mathcal{K}})$, i.e. $\mu^*_{\mathcal{K}}$ and $\Sigma^*_{\mathcal{K}}$ \end{algorithmic} \end{algorithm} The following proposition gives our predictive distribution and its asymptotic properties. \begin{Proposition}[\textbf{Predictive Distribution}] Let $X$ be a compact, nonempty subset of $\mathbb{R}^{n \times D}$, and let $\mu^*_M=[\mu_1^*,\ldots,\mu_M^*]$ be the sub-models' predictions. We use $\{\mathcal{K}_1,\ldots,\mathcal{K}_P\}$ as defined in Algorithm \ref{alg:1}. We further assume that (i) $\lim_{n\to \infty} M = \infty$, (ii) $\lim_{n\to \infty} m_0 = \infty$, where $m_0$ is the partition size, and (iii) $\lim_{n\to \infty} |\mathcal{C}_i| = \infty,\; i=1,\ldots,P$, where $|\mathcal{C}_i|$ is the size of the $i$'th cluster. The second condition implies that the original experts become more informative with increasing $n$, while the third condition means that the number of experts in each cluster increases. In addition, the third condition implies that $P \ll M$, which describes the dependency between the experts.\footnote{If we assume perfect diversity between experts (i.e., CI), then $P \approx M$. In this case, the consistency still holds due to the consistency of GRBCM, but it is not a realistic assumption.} Then the estimator based on Algorithm \ref{alg:1}, $y^*_{\mathcal{K}}$, is consistent, i.e. \begin{equation} \label{eq:consistent_prediction} \begin{cases} \lim_{n\to \infty} \mu^*_{\mathcal{K}}=\mu^* \\ \lim_{n\to \infty} \Sigma^*_{\mathcal{K}}=\Sigma^*. \end{cases} \end{equation} \end{Proposition} \textit{Proof:} The proof is straightforward due to the consistency of GRBCM. According to assumptions (ii) and (iii), when $n \to \infty$, each cluster returns a consistent predictor because the aggregation inside the clusters is based on GRBCM. Combining the consistent new experts $\{\mathcal{K}_1,\ldots,\mathcal{K}_P\}$ in Step 5 of Algorithm \ref{alg:1} then leads to a consistent prediction. We provide here the proof for the variance when GPoE is used in Step 5, and note that the proof for the mean is analogous. Let $\Sigma_{\mathcal{K}_i}^*$ be the covariance matrix of $\mathcal{K}_i$, obtained in Step 4 of Algorithm \ref{alg:1}; then the aggregated precision of GPoE (Step 5 of Algorithm \ref{alg:1}) is equal to \begin{align*} \lim_{n\to \infty} (\Sigma_{\mathcal{K}}^*)^{-1} = & \lim_{n\to \infty} \sum_{i=1}^P \frac{1}{P} (\Sigma_{\mathcal{K}_i}^*)^{-1} = \sum_{i=1}^P \frac{1}{P} \lim_{n\to \infty}(\Sigma_{\mathcal{K}_i}^*)^{-1} \\ =& \sum_{i=1}^P \frac{1}{P} (\Sigma^*)^{-1} =(\Sigma^*)^{-1}, \end{align*} where the first equality is based on the definition of GPoE with equal weights and the third one is due to the consistency of GRBCM.
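For completeness, the following illustrative sketch assembles Algorithm~\ref{alg:1} end to end, reusing the hedged helpers \texttt{estimate\_precision}, \texttt{grbcm}, and \texttt{gpoe} sketched earlier. Two simplifications are assumptions made for brevity: the original experts' predictions stand in for the experts retrained on $\mathcal{D}_{bi}=\{\mathcal{D}_b,\mathcal{D}_i\}$, and spectral clustering runs on $|\hat{\Omega}|$ (since affinities must be non-negative) with uniform weights $\beta_i=1$.
\begin{verbatim}
import numpy as np
from sklearn.cluster import SpectralClustering

def dgea(mu, var, mu_b, var_b, lam, P):
    # mu, var: (M, n_t) per-expert predictions; mu_b, var_b: base expert.
    Omega = estimate_precision(mu.T, lam)              # Steps 1-2
    A = np.abs(Omega)
    np.fill_diagonal(A, 0.0)
    labels = SpectralClustering(n_clusters=P,          # Step 3
                                affinity='precomputed').fit_predict(A)
    mu_K, var_K = [], []
    for c in range(P):                                 # Step 4: experts K_c
        idx = np.flatnonzero(labels == c)
        m = np.vstack([mu_b[None], mu[idx]])
        v = np.vstack([var_b[None], var[idx]])
        beta = np.full((len(idx) + 1, 1), 1.0)         # assumed beta_i = 1
        mk, vk = grbcm(m, v, beta)
        mu_K.append(mk)
        var_K.append(vk)
    return gpoe(np.array(mu_K), np.array(var_K))       # Step 5: GPoE
\end{verbatim}
In the next section, we showcase the importance of taking local experts' dependencies into account and the competitive performance of our approach using both artificial and real-world datasets.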
\section{Experiments} \label{experiments} The prediction quality of the proposed dependent Gaussian expert aggregation method (DGEA) is assessed in this section on both artificial and real-world datasets. The quality of predictions is evaluated in two ways: the standardized mean squared error (SMSE) measures the accuracy of the prediction mean, while the mean standardized log loss (MSLL) evaluates the quality of the predictive distribution \cite{Rasmussen}. The standard squared exponential kernel with automatic relevance determination and a Gaussian likelihood are used. The experiments have been done in MATLAB using the GPML package\footnote{\url{http://www.gaussianprocess.org/gpml/code/matlab/doc/}}. Random partitioning of the training dataset has been used in all experiments. \subsection{Toy Example} \label{f_x} The goal of our first experiment is to study the effect of dependency detection on the prediction quality and computation time. It is based on simulated data of a one-dimensional analytical function \cite{Liu}, \begin{equation} f(x) = 5x^2\sin(12x) + (x^3 -0.5)\sin(3x-0.5)+4\cos(2x) + \epsilon, \label{eq:toy_example} \end{equation} where $\epsilon \sim \mathcal{N}\left(0, (0.2)^2\right)$. We generated $n=10^4$ training points in $[0,1]$ and $n_t=0.1n$ test points in $[-0.2,1.2]$. The data is normalized to zero mean and unit variance. We assigned 200, 250, 330, 500, and 1000 data points to each expert, which leads to 50, 40, 30, 20, and 10 experts, respectively. Figure~\ref{fig.3} shows the sensitivity of the different DGP methods with respect to the change in the number of experts. \begin{figure}[hbt!] \centering \subfloat[ SMSE]{\includegraphics[width=0.85\columnwidth]{./plots/smse_1.pdf}% \label{fig.3.a}} \hfil \subfloat[MSLL]{\includegraphics[width=0.85\columnwidth]{./plots/msll_1.pdf}% \label{fig.3.b}} \hfil \subfloat[Time]{\includegraphics[width=0.85\columnwidth]{./plots/time_1.pdf}% \label{fig.3.c}} \caption{\textbf{Prediction quality} of different DGP methods for different numbers of experts on the simulated data of the analytical function in \eqref{eq:toy_example}.} \label{fig.3} \end{figure} \begin{table*}[hbt!] \caption{\textbf{Prediction quality} for various methods on the \textit{Pumadyn}, \textit{Kin40k}, \textit{Sarcos}, and \textit{Song} data. For both quality measures, i.e.
SMSE and MSLL, smaller values are better.} \label{table.1} \vskip -5 mm \begin{center} \begin{small} \begin{sc} \begin{tabular}{lccccccccr} \toprule \multirow{2}{*}{} & \multicolumn{2}{c}{\textit{Pumadyn}} & \multicolumn{2}{c}{\textit{Kin40k}} & \multicolumn{2}{c}{\textit{Sarcos}} & \multicolumn{2}{c}{\textit{Song}} & \\ \toprule Model & SMSE & MSLL & SMSE & MSLL & SMSE & MSLL & SMSE & MSLL \\ \midrule DGEA (Ours) & \textbf{0.0486} & \textbf{-1.5133} & \textbf{0.0538} & \textbf{-1.3025} & \textbf{0.0269} & \textbf{-1.823} & \textbf{0.8084} & \textbf{-0.122} \\ PoE & 0.0505 & 4.8725 & 0.0856 & 2.4153 & 0.0311 & 25.2807 & 0.8169 & 69.9464 \\ GPoE & 0.0505 & -1.4936 & 0.0856 & -1.2286 & 0.0311 & -1.7756 & 0.8169 & \textbf{-0.123} \\ BCM & 0.0499 & 4.6688 & 0.0818 & 1.6974 & 0.0308 & 24.868 & 10.4291 & 44.1745 \\ RBCM & 0.0498 & 12.1101 & 0.0772 & 2.5256 & 0.0305 & 61.5392 & 5.4373 & 1.2089 \\ GRBCM & 0.0511 & -1.488 & 0.0544 & -1.2785 & 0.0305 & -1.4308 & 0.8268 & 0.2073 \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table*} The DGEA prediction is based on using GPoE in Step 5 of Algorithm \ref{alg:1}. Figure \ref{fig.3.a} depicts the SMSE values of the different SOTA methods. Since PoE and GPoE have the same SMSE value, the line of PoE is hidden in the plot. As the number of experts increases, the prediction error rises because fewer observations are assigned to each expert, and therefore the quality of each expert decreases. While (G)PoE and (R)BCM return poor predictions, DGEA and GRBCM present better results, and DGEA has the smallest prediction error across the different numbers of experts. Figure \ref{fig.3.b} reveals the quality of the predictive distribution. The overconfident methods (PoE, BCM, and RBCM) return a smaller MSLL for 10 experts. However, their MSLL values increase dramatically with growing $M$. As the authors in \cite{Bachoc} and \cite{Szabo} have shown, GPoE is a conservative method and thus returns a predictive distribution of higher quality; in Figure \ref{fig.3.b} we can see that its predictive distribution has a higher quality compared to PoE and (R)BCM. However, DGEA provides an even higher-quality predictive distribution across the different values of $M$. Figure \ref{fig.3.c} shows the computational costs of the different methods. Comparing the running time of DGEA and GRBCM demonstrates that DGEA takes about half the time of GRBCM, while its running time is almost indistinguishable from that of the most efficient methods, (G)PoE and (R)BCM. \subsection{Real-World Datasets} \label{real_datasets} In this section, we use four real-world datasets: \textit{Pumadyn}, \textit{Kin40k}, \textit{Sarcos}, and \textit{Song}. \textit{Pumadyn}\footnote{\url{https://www.cs.toronto.edu/~delve/data/pumadyn/desc.html}} is a generated 32D dataset with 7168 training points and 1024 test points. The 8D \textit{Kin40k} dataset \cite{Seeger} contains $10^4$ training points and $3\times10^4$ test points. \textit{Sarcos}\footnote{\url{http://www.gaussianprocess.org/gpml/data/}} is a 21D medium-scale real-world dataset with 44,484 training and 4,449 test points. The \textit{Song} dataset\footnote{\url{https://archive.ics.uci.edu/ml/datasets/yearpredictionmsd}} \cite{Bertin-Mahieux} is a 91D dataset with 515,345 instances, divided into 463,715 training examples and 51,630 test examples. We extract the first $10^5$ songs from this dataset for training and keep the original set of 51,630 songs for testing.
The random partitioning method has been used to divide each dataset into partitions and to generate the experts. The number of experts is 20 for \textit{Pumadyn} and \textit{Kin40k}, 72 for \textit{Sarcos}, and 150 for \textit{Song}. For the \textit{Pumadyn} and \textit{Kin40k} datasets, 5 clusters are used, and for \textit{Sarcos} and \textit{Song}, 10 clusters. Table \ref{table.1} depicts the prediction quality of the different methods. On the \textit{Pumadyn}, \textit{Kin40k}, and \textit{Sarcos} datasets, DGEA clearly outperforms the other methods. BCM and RBCM show a lower prediction error compared to (G)PoE on these datasets, but their negative log-likelihood (MSLL) is quite large. Since GPoE provides conservative predictions and its posterior distribution converges to the true predictive distribution, it performs well with respect to the MSLL value, even better than GRBCM on the \textit{Pumadyn} and \textit{Sarcos} datasets. The drawback of the PoE and (R)BCM methods can be seen in their MSLL values, which show that their predictive distributions do not have competitive quality and tend to produce overconfident and inconsistent predictions, as also discussed by \cite{Szabo} and \cite{Liu}. On the \textit{Song} dataset, DGEA and GPoE return the better predictions: while GPoE performs slightly better than DGEA with respect to MSLL, DGEA has a lower prediction error. The GRBCM method returns different prediction qualities for different datasets. Since in this work a new ensemble method is proposed for the non-parametric regression problem, random partitioning is used, because in this case all Gaussian experts can cover the full sample space and work as global predictors. The quality of GRBCM is higher under disjoint partitioning, which is consistent with the results presented in~\cite{Liu}. Overall, however, the prediction quality of DGEA surpasses that of the other methods, which shows the importance of taking the experts' dependencies into account. \section{Conclusion} In this work, we have proposed DGEA, a novel DGP approach which leverages the dependencies between experts to improve the prediction quality through local aggregation of experts' predictions. Comparable SOTA methods assume conditional independence between experts when combining them, which leads to poor predictions in practice. Our approach uses an undirected graphical model to detect strong dependencies between experts and defines clusters of interdependent experts. Theoretically, we showed that our new local approximation approach provides consistent results when $n \to \infty$. Through empirical analyses, we illustrated the superiority of DGEA over existing SOTA aggregation methods for scalable GPs. For future work, we identify two directions for further research. First, in this work the aggregated posterior integrates Gaussian graphical models with the generalized robust Bayesian committee machine and the generalized product of experts; another aggregation approach could be a latent variable graphical model, which assumes that the final predictor is a latent variable within the graph and may improve the prediction quality by using interdependencies between experts while reducing time complexity. Second, the GGM relies on the assumption that all experts are jointly Gaussian and cannot be used to explain complex models with non-Gaussian distributions.
Therefore, finding a flexible and capable substitute for the GGM to capture the properties of GP experts is left to future work. \bibliographystyle{./IEEEtran}
\section{Introduction} Graphs are a flexible representation, widely used for representing diverse data and phenomena. In recent years, graph neural networks (GNNs), deep models that operate over graphs, have become the leading approach for learning on graph-structured data \citep{bruna2013spectral,kipf2016semi,velivckovic2017graph,gilmer2017neural}. In many domains, graphs vary significantly in size. This is the case in molecular biology, where molecules are represented as graphs and their sizes vary from few-atom compounds to proteins with thousands of nodes. Graph sizes are even more heterogeneous in social networks, ranging from dozens of nodes to billions of nodes. Since a key feature of GNNs is that they can operate on graphs regardless of their size, a fundamental question arises: \textbf{``When do GNNs generalize to graphs of sizes that were not seen during training?''}. Aside from being an intriguing theoretical question, the size-generalization problem has important practical implications. In many domains, it is hard to collect ground-truth labels for large graphs. For instance, many combinatorial optimization problems can be represented as graph classification problems, but labeling large graphs for training may require solving large and hard optimization problems. In other domains, it is often very hard for human annotators to correctly label complex networks. It would therefore be highly valuable to develop techniques that can train on small graphs and generalize to larger graphs. This first requires that we develop an understanding of size generalization. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{fig1.png} \caption{We study the ability of GNNs to generalize from small to large graphs, focusing on graphs in which the local structure depends on the graph size. The figure shows two graph distributions that differ in size and degree distribution. We show that when the local structures in the test set are different from the local structures in the training set, it is difficult for GNNs to generalize. Additionally, we suggest ways to improve generalization. } \label{fig:fig1} \end{figure} In some cases, GNNs can naturally generalize to graphs whose size is different from what they were trained on, but it is largely unknown when such generalization occurs. Empirically, several papers report good size-generalization performance \citep{li2018combinatorial, luz2020learning, sanchez2020learning}. Other papers \cite{velickovic2019deep, khalil2017learning, joshi2020learning} show that size generalization can be hard. Recently, \citet{xu2020neural} provided theoretical evidence of size-generalization capabilities in one-layer GNNs. The current paper characterizes an important type of graph distributions where size generalization is challenging. Specifically, we analyze graphs for which the distribution of local structures (defined formally in \secref{sec:local graph patterns}) depends on the size of the graph. See \figref{fig:fig1} for an illustrative example. This dependency is prevalent in a variety of graphs, including, for instance, the preferential attachment (PA) model \cite{barabasi1999emergence}, which captures graph structure in social networks \cite{barabasi2002evolution}, biological networks \cite{eisenberg2003preferential,light2005preferential}, and internet link data \cite{capocci2006preferential}. In PA, the maximal node degree grows with the graph size.
As a second example, in a graph representation of dense point clouds, the node degree grows with the cloud density, and hence with the graph size \cite{hermosilla2018monte}. To characterize generalization to new graph sizes, we first formalize a representation of local structures that we call $d$-patterns, inspired by \cite{weisfeiler1968reduction,morris2019weisfeiler, xu2018powerful}. $d$-patterns generalize the notion of node degrees to a $d$-step neighborhood of a given node, capturing the values of a node and its $d$-step neighbors, as seen by GNNs. We then prove that even a small discrepancy in the distribution of $d$-patterns between the test and train distributions may result in weight assignments that do not generalize well. Specifically, it implies that when training GNNs on small graphs there exist ``bad'' global minima that fail to generalize to large graphs. We then study empirically the relation between size generalization and $d$-pattern discrepancy in synthetic graphs where we control the graph structure and size. We find that as the $d$-pattern discrepancy grows, the generalization of GNNs to new graph sizes deteriorates. Finally, we discuss approaches for improving size generalization. We take a self-supervised learning approach and propose a novel pretext task aimed at learning useful $d$-pattern representations from both small and large graphs. We show that when training on labeled small graphs and with our new self-supervised task on large graphs, classification accuracy increases on large graphs by $4\%$ on average on real datasets. This paper makes the following contributions: (1) We identify a family of important graph distributions where size generalization is difficult, using a combination of theoretical and empirical results. (2) We suggest approaches for improving size generalization when training on such distributions and show that they lead to a noticeable performance gain. The ideas presented in this paper can be readily extended to other graph learning setups where there is a discrepancy between the local structures of the train and test sets. \section{Preliminaries}\label{sec:preliminaries} \textbf{Notation.} We denote by $\{(a_1, m_{a_1}),\dots,(a_n, m_{a_n})\}$ a \textbf{multiset}, that is, a set where we allow multiple instances of the same element. Here $a_1,\dots,a_n$ are distinct elements, and $m_{a_i}$ is the number of times $a_i$ appears in the multiset. Bold-face letters represent vectors. \textbf{Graph neural networks.} In our theoretical results, we focus on the message-passing architecture from \cite{morris2019weisfeiler}. Let $G=(V,E)$ be a graph, and for each node $v\in V$ let $\mathbf{h}^{(0)}_v\in\mathbb{R}^{d_0}$ be a node feature vector and $\mathcal{N}(v)$ its set of neighbors. The $t$-th layer of a first-order GNN is defined as follows for $t>0$: \begin{equation*} \mathbf{h}^{(t)}_v = \sigma\left( W_2^{(t)} \mathbf{h}^{(t-1)}_v + \sum_{u\in\mathcal{N}(v)} W_1^{(t)} \mathbf{h}^{(t-1)}_u + \mathbf{b}^{(t)}\right). \end{equation*} Here, $W_1^{(t)},W_2^{(t)}\in\mathbb{R}^{d_{t}\times d_{t-1}}$ and $\mathbf{b}^{(t)}\in \mathbb{R}^{d_t}$ denote the parameters of the $t$-th layer of the GNN, and $\sigma$ is some non-linear activation (e.g., ReLU). It was shown in \cite{morris2019weisfeiler} that GNNs composed of these layers have maximal expressive power with respect to all message-passing neural networks.
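For concreteness, a dense-adjacency PyTorch sketch of this layer is shown below; it is a minimal illustration of the update rule above, not the implementation used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class FirstOrderGNNLayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.w1 = nn.Linear(d_in, d_out, bias=False)  # neighbor term W_1
        self.w2 = nn.Linear(d_in, d_out)              # self term W_2 and b

    def forward(self, H, A):
        # H: (n, d_in) node features h^(t-1); A: (n, n) adjacency matrix.
        # A @ H computes the neighbor sum for every node simultaneously.
        return torch.relu(self.w2(H) + self.w1(A @ H))
\end{verbatim}
For node prediction, the output of a $T$-layer GNN for node $v$ is $\mathbf{h}_v^{(T)}$.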
For graph prediction tasks, an additional readout layer is used: $g^{(T)} = \sum_{v\in V} \mathbf{h}_v^{(T)}$, possibly followed by a fully connected network. \textbf{Graph distributions and local structures.} In this paper we focus on graph distributions for which the local structure of the graph (formally defined in \secref{sec:local graph patterns}) depends on the graph size. A well-known distribution family with this property is $G(n,p)$ graphs, also known as Erd\H{o}s-R\'enyi. A graph sampled from $G(n,p)$ has $n$ nodes, and edges are drawn i.i.d. with probability $p$. The mean degree of each node is $n\cdot p$; hence fixing $p$ and increasing $n$ changes the local structure of the graph, specifically the node degrees. As a second example, we consider the preferential attachment model \cite{barabasi1999emergence}. Here, $n$ nodes are drawn sequentially, and each new node is connected to exactly $m$ other nodes, where the probability to connect to other nodes is proportional to their degree. As a result, high-degree nodes have a high probability that new nodes will connect to them. Increasing the graph size causes the maximum degree in the graph to increase and thus changes its local structure. We also show that, in real datasets, the local structures of small and large graphs differ. This is further discussed in \secref{sec:improve size gen} and \appref{appen:counting patterns}. \section{Overview} \subsection{The size-generalization problem} We are given two distributions over graphs $P_{1}, P_{2}$ that contain small and large graphs, respectively, and a task that can be solved for all graph sizes using a GNN. We train a GNN on a training set $\mathcal{S}$ sampled i.i.d. from $P_{1}$ and study its performance on $P_{2}$. In this paper, we focus on distributions that have a high discrepancy between the local structure of the graphs sampled from $P_1$ and $P_2$. \textbf{Size generalization is not trivial.} Before we proceed with our main results, we argue that even for the simple regression task of counting the number of edges in a graph, which is solvable for all graph sizes by a 1-layer GNN, GNNs do not naturally generalize to new sizes. Specifically, we show that training a 1-layer GNN on a non-diverse dataset reaches a non-generalizing solution with probability 1 over the random initialization. In addition, we show that, in general, the generalizing solution is not the least $L_1$ or $L_2$ norm solution, and hence cannot be reached using standard regularization methods. See the full derivation in \appref{appen:linear GNN}. \subsection{Summary of the main argument} This subsection describes the main flow of the next sections in the paper. We explore the following arguments: \textbf{(i) $d$-patterns are a correct notion for studying the expressivity of GNNs.} To study size generalization, we introduce a concept named $d$-patterns, which captures the local structure of a node and its $d$-step neighbors, as captured by GNNs. This notion is formally defined in Section \ref{sec:local graph patterns}. For example, for graphs without node features, a $1$-pattern of a node represents its degree, and its $2$-pattern represents its degree and the set of degrees of its immediate neighbors. We argue that $d$-patterns are a natural abstract concept for studying the expressive power of GNNs: first, we extend a result by \cite{morris2019weisfeiler} and prove that $d$-layer GNNs (with an additional node-level network) can be programmed to output any value on any $d$-pattern independently.
Conversely, as shown in \cite{morris2019weisfeiler}, GNNs output a constant value when given nodes with the same $d$-pattern, meaning that the expressive power of GNNs is limited by their values on $d$-patterns. \textbf{(ii) $d$-pattern discrepancy implies the existence of bad global minima.} In Section \ref{sec:bad_global}, we focus on the case where graphs in the test distribution contain $d$-patterns that are not present in the train distribution. In that case, we prove that for any graph task solvable by a GNN, there is a weight assignment that succeeds on the training distribution and fails on the test data. In particular, when the training data contains small graphs and the test data contains large graphs, if there is a $d$-pattern discrepancy between large and small graphs, then there are ``bad'' global minima that fail to generalize to larger graphs. \textbf{(iii) GNNs converge to non-generalizing solutions.} In Section \ref{sec: size gen problem empirical validation} we complement these theoretical results with a controlled empirical study that investigates the generalization capabilities of the solutions that GNNs converge to. We show, for several synthetic graph distributions in which we have control over the graph structure, that the generalization of GNNs in practice is correlated with the discrepancy between the local distributions of large and small graphs. Specifically, when the $d$-patterns in large graphs are not found in small graphs, GNNs tend to converge to a global minimum that succeeds on small graphs and fails on large graphs. This happens even if there is a ``good'' global minimum that solves the task for all graph sizes. This phenomenon is also prevalent in real datasets, as we show in Section \ref{sec:improve size gen}. \textbf{(iv) Size generalization can be improved.} Lastly, in Section \ref{sec:improve size gen}, we discuss two approaches for improving size generalization, motivated by our findings. We first formulate the learning problem as a domain adaptation (DA) problem where the source domain consists of small graphs and the target domain consists of large graphs. We then suggest two learning setups: (1) training GNNs on a novel self-supervised task aimed at learning meaningful representations for $d$-patterns from both the target and source domains; (2) a semi-supervised learning setup with a limited number of labeled examples from the target domain. We show that both setups are useful in a series of experiments on synthetic and real data. Notably, training with our new SSL task increases classification accuracy on large graphs in real datasets. \section{GNNs and local graph patterns}\label{sec:local graph patterns} We wish to understand theoretically the conditions under which a GNN trained on graphs with a small number of nodes can generalize to graphs with a large number of nodes. To answer this question, we first analyze what information is available to each node after a graph is processed by a $d$-layer GNN. It is easy to see that every node can receive information from its neighbors that are at most $d$ hops away. We note, however, that nodes do not have full information about their $d$-hop neighborhood. For example, GNNs cannot determine if a triangle is present in a neighborhood of a given node~\cite{chen2020can}.
To characterize the information that can be found in each node after a $d$-layer GNN, we introduce the notion of $d$-patterns, motivated by the structure of the node descriptors used in the Weisfeiler-Lehman test \cite{weisfeiler1968reduction}: a graph isomorphism test that was recently shown to have the same representational power as GNNs~\cite{xu2018powerful, morris2019weisfeiler}. \begin{definition}[$d$-patterns] Let $C$ be a finite set of node features, and let $G=(V,E)$ be a graph with node feature $c_v\in C$ for every node $v\in V$. We define the \textbf{d-pattern} of a node $v\in V$ for $d\geq 0$ recursively: For $d=0$, the $0$-pattern is $c_v$. For $d>0$, the $d$-pattern of $v$ is $p=(p_v,\{(p_{i_1}, m_{p_{i_1}}), \dots, (p_{i_\ell}, m_{p_{i_\ell}})\})$ iff node $v$ has $(d-1)$-pattern $p_v$ and for every $j\in \{1,\dots,\ell\}$ the number of neighbors of $v$ with $(d-1)$-pattern $p_{i_j}$ is exactly $m_{p_{i_j}}$. Here, $\ell$ is the number of distinct neighboring $(d-1)$-patterns of $v$. \end{definition} \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{0-1-2-patterns.pdf} \caption{\textbf{Top:} A graph with 4 nodes. Each color represents a different feature. \textbf{Bottom:} The $0$-, $1$-, and $2$-patterns of the black node.} \label{fig:0-1-2-patterns} \end{figure} In other words, the $d$-pattern of a node is an encoding of the $(d-1)$-patterns of itself and its neighbors. For example, assume all the nodes in the graphs start with the same node feature. The $1$-pattern of each node is its degree. The $2$-pattern of each node is its degree, concatenated with, for each possible degree $i\in \mathbb{N}$, the number of its neighbors with degree $i$. In the same manner, the $3$-pattern of a node is its $2$-pattern together with, for each possible $2$-pattern, the number of its neighbors with this exact $2$-pattern. \revision{\figref{fig:0-1-2-patterns} illustrates $0$-, $1$-, and $2$-patterns for a graph with three categorical node features, represented by three colors (yellow, grey, and black). For this case, which generalizes the uniform node feature case discussed above, the 0-pattern is the node's categorical feature; 1-patterns count the number of neighbors with a particular feature. The same definition applies to higher-order patterns.} We claim that the definition of $d$-patterns gives an exact characterization of the potential knowledge that a $d$-layer GNN has about each node. First, Theorem \ref{thm:d-patterns constant} is a restatement of Theorem 1 in \cite{morris2019weisfeiler} in terms of $d$-patterns: \begin{theorem}\label{thm:d-patterns constant} Any function that can be represented by a $d$-layer GNN is constant on nodes with the same $d$-patterns. \end{theorem} The theorem states that any $d$-layer GNN will output the same result for nodes with the same $d$-pattern. Thus, we can refer to the output of a GNN on the $d$-patterns themselves. We stress that these $d$-patterns contain only part of the information regarding the $d$-neighborhood ($d$ hops away from the node), and different neighborhoods could have the same $d$-patterns. The full proof can be found in \appref{appen:proofs from theoretical results} and follows directly from the analogy between the iterations of the WL algorithm and $d$-patterns. Next, the following theorem shows that given a set of $d$-patterns and the desired output for each such pattern, there is an assignment of weights to a GNN with $d+2$ layers that perfectly fits the output for each pattern.
\begin{theorem}\label{thm:overfit} Let $C$ be a finite set of node features, $P$ be a finite set of $d$-patterns on graphs with maximal degree $N\in\mathbb{N}$, and for each pattern $p\in P$ let $y_p\in\mathbb{R}$ be some target label. Then there exists a GNN with $d+2$ layers, width bounded by $\max\left\{(N+1)^d \cdot |C|, 2\sqrt{|P|} \right\}$ and ReLU activation such that for every graph $G$ with nodes $v_1,\dots,v_n$ and corresponding $d$-patterns $p_1,\dots,p_n \in P$, the output of this GNN on $v_i$ is exactly $y_{p_i}$. \end{theorem} The full proof is in \appref{appen:proofs from theoretical results}. This theorem strengthens Theorem 2 from \cite{morris2019weisfeiler} in two ways: (1) We prove that one can specify the output for every $d$-pattern, while \cite{morris2019weisfeiler} show that there is a $d$-layer GNN that can distinguish all $d$-patterns; (2) Our network construction is more efficient in terms of width and dependence on the number of $d$-patterns ($2\sqrt{|P|}$ instead of $|P|$). We note that the width required by the theorem is not very large if the depth $d$ is small. In practice, shallow GNNs are very commonly used and are empirically successful. The $d+2$ layers in the theorem can be split into $d$ message-passing layers plus $2$ fully connected layers that are applied to each node independently. \thmref{thm:overfit} can be readily extended to a vector output for each $d$-pattern, at the cost of increasing the width of the layers. Combining \thmref{thm:d-patterns constant} and \thmref{thm:overfit} shows that we can independently control the values of $d$-layer GNNs on the set of $d$-patterns (possibly with an additional node-wise function) and these values completely determine the GNN's output. \section{"Bad" global minima exist}\label{sec:bad_global} We now consider \emph{any} graph-prediction task solvable by a $d$-layer GNN. Assume we have a training distribution of (say, small) graphs and a possibly different test distribution of (say, large) graphs. We show that if the graphs in the test distribution introduce unseen $d$-patterns, then there exists a $(d+3)$-layer GNN that solves the task on the train distribution and fails on the test distribution. We will consider both graph-level tasks (i.e., predicting a single value for the entire graph, e.g., graph classification) and node-level tasks (i.e., predicting a single value for each node, e.g., node classification). \begin{theorem}\label{thm:graph tasks} Let $P_1$ and $P_2$ be finitely supported distributions of graphs. Let $P^d_1$ be the distribution of $d$-patterns over $P_1$ and similarly $P^d_2$ for $P_2$. Assume that every graph in the support of $P_2$ contains a node with a $d$-pattern in $P^{d}_2\setminus P^{d}_1$. Then, for any graph regression task solvable by a GNN with depth $d$, there exists a GNN with depth at most $d + 3$ that perfectly solves the task on $P_1$ and predicts an answer with arbitrarily large error on all graphs from $P_2$. \end{theorem} The proof directly uses the construction from \thmref{thm:overfit}, and can be found in \appref{appen:proof from sec corollaries}. The main idea is to leverage the unseen $d$-patterns from $P_2^d$ to change the output on graphs from $P_2$. As an example, consider the task of counting the number of edges in the graph.
In this case, there is a simple GNN that generalizes to all graph sizes: the GNN first calculates the node degree for each node using the first message-passing layer and then uses the readout function to sum the node outputs. This results in the output $2|E|$, which can be scaled appropriately. To define a network that outputs wrong answers on large graphs under our assumptions, we can use \thmref{thm:overfit} and make sure that the network outputs the node degree on patterns in $P^{d}_1$ and some other value on patterns in $P^{d}_2\setminus P^{d}_1$. Note that although we only showed in \thmref{thm:overfit} that the output of GNNs can be chosen for nodes, the value of GNNs on the nodes has a direct effect on graph-level tasks. This happens because of the global readout function used in GNNs, which aggregates the GNN output over all the nodes. Next, we prove a similar theorem for node tasks. Here, we show a relation between the discrepancy of $d$-pattern distributions and the error on the large graphs. \begin{theorem}\label{thm:overfit size node} Let $P_1$ and $P_2$ be finitely supported distributions on graphs, and let $P^{d}_1$ be the distribution of $d$-patterns over $P_1$ and similarly $P^{d}_2$ for $P_2$. For any node prediction task that is solvable by a GNN with depth $d$ and any $\epsilon>0$, there exists a GNN with depth at most $d + 2$ that has 0-1 loss (averaged over the nodes) smaller than $\epsilon$ on $P_1$ and 0-1 loss $\Delta(\epsilon)$ on $P_2$, where $ \Delta(\epsilon)=\max_{A:P^{d}_1(A)<\epsilon}P^{d}_2(A).$ Here, $A$ is a set of $d$-patterns, and $P(A)$ is the total probability mass for that set under $P$. \end{theorem} This theorem shows that for node prediction tasks, if there is a large discrepancy between the graph distributions (a set of $d$-patterns with small probability in $P_1^d$ and large probability in $P_2^d$), then there is a solution that solves the task on $P_1$ and generalizes badly to $P_2$. The full proof can be found in \appref{appen:proof from sec corollaries}. \textbf{Examples.} The above results show that even for simple tasks, GNNs may fail to generalize to unseen sizes. Here are two examples. (i) Consider the task of counting the number of edges in a graph. By \thmref{thm:graph tasks}, there is a GNN that successfully outputs the number of edges in graphs with maximal degree up to $N$ and fails on graphs with larger maximal degrees. (ii) Consider some node regression task where the training set consists of graphs sampled i.i.d.\ from an Erd\H{o}s-R\'enyi model $G(n,p)$ and the test set contains graphs sampled i.i.d.\ from $G(2n,p)$. In this case, a GNN trained on the training set will be trained on graphs with an average degree $np$, while the test set contains graphs with an average degree $2np$. When $n$ is large, with very high probability, the training and test sets will not have any common $d$-patterns, for any $d>0$. Hence, by \thmref{thm:overfit size node}, there is a GNN that solves the task for small graphs and fails on large graphs. The next section studies the relation between size generalization and local graph structure in controlled experimental settings on synthetic data. \section{A controlled empirical study}\label{sec: size gen problem empirical validation} The previous section showed that there exist bad global minima that fail to generalize to larger graphs. In this section, we study empirically whether common training procedures lead to bad global minima in practice.
Specifically, we demonstrate, on several synthetic graph distributions, that reaching bad global minima is tightly connected to the discrepancy between the $d$-pattern distributions of large and small graphs. We identify two main phenomena: \textbf{(A)} When there is a large discrepancy between the $d$-pattern distributions of large and small graphs, GNNs fail to generalize; \textbf{(B)} As the discrepancy between these distributions gets smaller, GNNs get better at generalizing to larger graphs. \textbf{Tasks.} In the following experiments, we use a controlled regression task in a student-teacher setting. In this setting, we sample a ``teacher'' GNN with random weights (drawn i.i.d.\ from $U([-0.1,0.1])$), freeze the network, and label each graph in the dataset using the output of the ``teacher'' network. Our goal is to train a ``student'' network, which has the same architecture as the ``teacher'' network, to fit the labels of the teacher network. The advantages of this setting are twofold: (1) \emph{A solution is guaranteed to exist}: We know that there is a weight assignment of the student network which perfectly solves the task for graphs of any size. (2) \emph{Generality}: It covers a diverse set of tasks solvable by GNNs. As the evaluation criterion, we use the squared loss. \begin{figure*}[ht] \centering \begin{tabular}{cccc} \includegraphics[width=0.23\textwidth]{vary_test_size.png}& \includegraphics[width=0.23\textwidth]{vary_test_size_normalize.png}& \includegraphics[width=0.23\textwidth]{vary_train_size.png}& \includegraphics[width=0.23\textwidth]{vary_test_p.png} \\ (a)&(b)&(c)&(d) \end{tabular} \caption{ The effect of graph size and $d$-pattern distribution on generalization in $G(n,p)$ graphs in a student-teacher graph regression task. The $y$-axis represents the squared loss in $\log_{10}$ scale. (a) Bounded training size $n\in [40,50]$ and varying test size with constant $p=0.3$. (b) \revision{Bounded training size $n\in [40,50]$ and varying test size {while keeping node degrees constant by changing} $p\in[0.15,0.3]$}. (c) Varying train size with constant test size. We train on graphs with $n$ nodes and constant $p=0.3$. Here, $n$ is drawn uniformly from $[40,x]$ and $x$ varies; we test on $n=150$, $p=0.3$. (d) Train on $n$ drawn uniformly from $[40,50]$ and $p=0.3$; test on $n=100$ and varying $p$. See discussion in the text.} \label{fig:validation_plots} \end{figure*} \textbf{Graph distribution.} Graphs were drawn from a $G(n,p)$ distribution. This distribution is useful for testing our hypothesis since we can modify the distribution of $d$-patterns simply by changing either $p$ or $n$. For example, 1-patterns represent node degrees, and in this model, the average degree of graphs generated from $G(n,p)$ is $np$. We provide experiments on additional graph distributions, such as preferential attachment (PA), in Appendix \ref{append:addtional experiments size gen problem}. \textbf{Architecture and training protocol.} We use a GNN as defined in \citet{morris2019weisfeiler} with ReLU activations. The number of GNN layers in the network we use is either $1$, $2$, or $3$; the width of the teacher network is $32$ and of the student network $64$, providing more expressive power to the student network. We obtained similar results when testing with a width of 32, the same as the teacher network. We use a summation readout function followed by a two-layer fully connected suffix. We use ADAM with a learning rate of $10^{-3}$. We added weight decay ($L_2$ regularization) with $\lambda = 0.1$.
We performed a hyper-parameter search on the learning rate and weight decay and used validation-based early stopping on the source domain (small graphs). The results are averaged over 10 random seeds. We used PyTorch Geometric \citep{fey2019fast} on an NVIDIA DGX-1. \textbf{Experiments.} We conducted four experiments, shown in Figure \ref{fig:validation_plots} (a-d). We note that in all the experiments, the loss on the validation set was effectively zero. First, we study the generalization of GNNs by training on a bounded size range $n\in [40,50]$ and varying the test size in $[50,150]$. Figure \ref{fig:validation_plots} (a) shows that when $p$ is kept constant while increasing the test graph sizes, size generalization degrades. Indeed, in this case, the underlying $d$-pattern distribution diverges from the training distribution. \revision{In \appref{append:addtional experiments size gen problem} we demonstrate that this problem persists for larger graphs with up to 500 nodes.} On the flip side, Figure \ref{fig:validation_plots} (b) shows that when $p$ is properly normalized to keep the degree $np$ constant while varying the graph size, we have significantly better generalization to large graphs. In this case, the $d$-pattern distribution remains similar. In the next experiment, shown in Figure \ref{fig:validation_plots} (c), we keep the test size constant at $n=150$ and vary the training size $n\in[40,x]$, where $x$ varies in $[50,150]$ and $p=0.3$ remains constant. In this case, we can see that as we train on graph sizes that approach the test graph sizes, the $d$-pattern discrepancy shrinks and generalization improves. In our last experiment, shown in Figure \ref{fig:validation_plots} (d), we train on $n\in[40,50]$ and $p=0.3$ and test on $G(n,p)$ graphs with $n=100$ and $p$ varying from $0.05$ to $0.5$. As mentioned before, the expected node degree of the graphs is $np$; hence, the distribution of $d$-patterns is most similar to the one observed in the training set when $p=0.15$. Indeed, this is the value of $p$ where the test loss is minimized. \textbf{Conclusions.} First, our experiments confirm phenomena \textbf{(A-B)}. Another conclusion is that size generalization is more difficult when using deeper networks. This is consistent with our theory, since in these cases the pattern discrepancy becomes more severe: for example, $2$-patterns divide nodes into significantly more classes than $1$-patterns do. Further results on real datasets appear in \secref{sec:improve size gen}. \textbf{Additional experiments.} In \appref{append:addtional experiments size gen problem}, we show that the conclusions above are consistent across different tasks (max clique, edge count, node regression), distributions (PA and point cloud graphs), and architectures (GIN \citep{xu2018powerful}). We also tried other activation functions (tanh and sigmoid). \revision{Additionally, we experimented with generalization from large to small graphs. The findings of this experiment confirm our previous understanding: generalization is better when the training and test sets have similar graph sizes (and similar $d$-pattern distributions).} \section{Towards improving size generalization}\label{sec:improve size gen} The results from the previous sections imply that the problem of size generalization is not only related to the size of the graph in terms of the number of nodes or edges but also to the distribution of $d$-patterns.
Based on this observation, we now formulate the size-generalization problem as a domain adaptation (DA) problem. We consider a setting where we are given two distributions over graphs: a source distribution $\mathcal{D}_S${} (say, for small graphs) and a target distribution $\mathcal{D}_T${} (say, for large graphs). The main idea is to adapt the network to unseen $d$-patterns appearing in large graphs. We first consider the \emph{unsupervised} DA setting, where we have access to labeled samples from the source $\mathcal{D}_S${} but the target data from $\mathcal{D}_T${} is unlabeled. Our goal is to infer labels on a test dataset sampled from the target $\mathcal{D}_T${}. To this end, we devise a novel SSL task that promotes learning informative representations of unseen $d$-patterns. We show that this approach improves the size-generalization ability of GNNs. Second, we consider a \emph{semi-supervised} setup, where we also have access to a small number (e.g., 1-10) of labeled examples from the target $\mathcal{D}_T${}. We show that such a setup, when feasible, can lead to a comparable improvement and benefits from our SSL task as well. \subsection{SSL for DA on graphs} In SSL for DA, a model is trained on unlabeled data to learn a \emph{pretext} task, which is different from the main task at hand. If the pretext task is chosen wisely, the model learns useful representations \citep{doersch2015unsupervised,gidaris2018unsupervised} that can help with the main task. Here, we train the pretext task on both the source and target domains, as was done for images and point clouds \citep{sun2019unsupervised,achituve2020self}. The idea is that the pretext task aligns the representations of the source and target domains, leading to better predictions of the main task for target graphs. \textbf{Pattern-tree pretext task.} We propose a novel pretext task that is motivated by Sections \ref{sec:bad_global}--\ref{sec: size gen problem empirical validation}: one of the main causes of bad generalization is unseen $d$-patterns in the test set. Therefore, we design a pretext task to encourage the network to learn useful representations for these $d$-patterns. \begin{wrapfigure}[17]{r}{0.25\textwidth} \centering \includegraphics[width=0.25\textwidth]{graph_tree_and_count.pdf} \caption{\textbf{Top left:} A graph with node features represented by colors. \textbf{Top right:} A tree that represents the $d$-patterns for the black node. \textbf{Bottom: }The tree descriptor is a vector with each coordinate containing the number of nodes from each class in each layer of the tree. \label{fig:pattern tree}} \end{wrapfigure} Our pretext task is a node prediction task in which the output node label is specifically designed to hold important information about the node's $d$-pattern. For an illustration of such a label, see Figure \ref{fig:pattern tree}. The construction of these labels is split into two procedures. First, we construct a tree that fully represents each node's $d$-pattern. The tree is constructed for a node $v$ in the following way: we start by creating a root node that represents $v$. We then create nodes for all of $v$'s neighbors and connect them to the root. All these nodes hold the features of the nodes they represent in the original graph. We continue to grow the tree recursively up to depth $d$ by adding new nodes that represent the neighbors (in the original graph) of the current leaves in our tree. This is a standard construction; see, e.g., \cite{xu2018powerful}.
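A minimal Python sketch of this expansion (an illustration added for concreteness; the tree is represented by its per-level node lists, so original graph nodes may appear repeatedly, exactly as in the unrolled construction above):
\begin{verbatim}
# Minimal sketch: unroll the pattern tree of node v into per-level
# node lists. Children of a tree node are all graph neighbors of the
# original node it represents.

def pattern_tree_levels(adj, v, d):
    levels = [[v]]                   # depth 0: the root
    for _ in range(d):
        frontier = [u for w in levels[-1] for u in adj[w]]
        levels.append(frontier)      # depth k: neighbors of depth k-1
    return levels

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a path 0-1-2-3
print(pattern_tree_levels(adj, 1, 2))          # [[1], [0, 2], [1, 1, 3]]
\end{verbatim}
The per-level histograms of node features over these lists form the descriptor described next.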
For more details about the construction of the pattern tree see \appref{appen:SSL_tree_task}. We then calculate a descriptor of the tree that will be used as the SSL output label for each node. The descriptor is a concatenation of histograms of the different node features in each layer of the tree. The network is then trained in a node regression setup with a dedicated SSL head to predict this descriptor. \subsection{Experiments} \begin{table*}[t] \setlength{\tabcolsep}{3.5pt} \centering \begin{sc} \scriptsize \begin{tabular}{ l c c c c c c c c } \textbf{Datasets} & \textbf{Deezer} & \textbf{IMDB - B} & \textbf{NCI1} & \textbf{NCI109} & \textbf{Proteins} & \textbf{Twitch} & \textbf{DD} & \textbf{Average} \\ \hline \textbf{Total-var. distance} & 1 & 0.99 & 0.16 & 0.16 & 0.48 & 1 & 0.15 & - \\ \hline \textbf{Small Graphs} & $56.5 \pm 0.8$ & $63.2 \pm 3.3$ & $75.5 \pm 1.6$ & $78.4 \pm 1.4$ & $75.4 \pm 3.1$ & $69.7 \pm 0.2$ & $71.1 \pm 4.4 $ & 70.0\% \\ \hline \textbf{Vanilla} & $41.1 \pm 6.8$ & $55.9 \pm 7.8$ & $65.9 \pm 4.3$ & $68.9 \pm 3.8$ & $76.0 \pm 8.5$ & $60.5 \pm 3.6$ & $76.3 \pm 3.2 $ & 63.5\% \\ \hline \textbf{Homo-GNN} & $40.5 \pm 6.6$ & $56.3 \pm 7.0$ & $66.0 \pm 3.7$ & $68.8 \pm 3.2$ & $77.1 \pm 10.0$ & $60.8 \pm 2.3$ & \pmb{$76.8 \pm 3.0$} & 63.8\% \\ \textbf{NM MTL} & \pmb{$51.6 \pm 8.5$} & $55.6 \pm 6.8$ & $49.9 \pm 7.8$ & $61.7 \pm 5.7$ & $78.8 \pm 8.4$ & $49.5 \pm 2.8$ & $67.4 \pm 5.4$ & 59.2\% \\ \textbf{NM PT} & $50.1 \pm 7.5$ & $54.9 \pm 6.7$ & $51.7 \pm 6.6$ & $55.8 \pm 5.0$ & $78.2 \pm 8.2$ & $48.4 \pm 4.0$ & $60.3 \pm 15.9$ & 57.1\% \\ \textbf{GAE MTL} & $49.4 \pm 11.0$ & $55.5 \pm 6.0$ & $51.2 \pm 9.9$ & $57.6 \pm 9.4$ & $79.5 \pm 11.7$ & $62.5 \pm 5.1$ & $67.8 \pm 10.0$ & 60.5\% \\ \textbf{GAE PT} & $47.1 \pm 10.0$ & $54.1 \pm 6.8$ & $58.9 \pm 7.6$ & $67.2 \pm 5.6$ & $70.5 \pm 9.4$ & $53.6 \pm 4.7$ & $69.0 \pm 7.1$ & 60.1\% \\ \textbf{NML MTL}& $46.4 \pm 9.5$ & $54.4 \pm 7.0$ & $52.3 \pm 6.3$ & $56.2 \pm 6.5$ & $78.7 \pm 6.8$ & $57.4 \pm 4.1$ & $64.7 \pm 11.9$ & 58.6\% \\ \textbf{NML PT} & $48.4 \pm 10.7$ & $53.8 \pm 6.1$ & $54.6 \pm 6.2$ & $56.1 \pm 8.1$ & $76.3 \pm 8.0$ & $54.9 \pm 4.7$ & $61.4 \pm 15.1$ & 57.9\% \\ \textbf{CL MTL} & $48.2 \pm 10.9$ & $54.6 \pm 6.6$ & $52.2 \pm 6.8$ & $55.7 \pm 5.8$ & $76.6 \pm 7.7$ & $59.4 \pm 3.5$ & $63.6 \pm 15.0$ & $58.6\%$ \\ \textbf{CL PT} & $47.6 \pm 9.7$ & $53.6 \pm 7.5$ & $57.4 \pm 8.1$ & $57.3 \pm 6.1$ & $77.6 \pm 4.7$ & $53.9 \pm 7.1$ & $69.2 \pm 5.5$ & $59.5\%$ \\ \hline \textbf{Pattern MTL (ours)} & $45.6 \pm 8.8$ & $56.8 \pm 9.2$ & $60.5 \pm 7.5$ & $67.9 \pm 7.2$ & $75.8 \pm 11.1$ & $61.6 \pm 3.5$ & \pmb{$76.8 \pm 3.0$} & 63.6\% \\ \textbf{Pattern PT (ours)} & $44.0 \pm 7.7$ & \pmb{$61.9 \pm 3.2$} & \pmb{$67.8 \pm 11.7$} & \pmb{$74.8 \pm 5.7$} & \pmb{$84.7 \pm 5.1$} & \pmb{$64.5 \pm 3.3$} & $74.9 \pm 5.2$ & \textbf{67.5\%} \\ \hline \end{tabular} \end{sc} \caption{Test accuracy of compared methods in 7 binary classification tasks. The Pattern-tree method with pretraining achieves the highest accuracy in most tasks and increases the average accuracy from 63\% to 67\% compared with the second-best method. High variance is due to the domain shift between the source and target domains.
} \label{tab:real datasets} \end{table*} \textbf{Baselines.} We compare our new pretext task to the following baselines: (1) \textbf{Vanilla}: standard training on the source domain; (2) \textbf{Homo-GNN}~\citep{tang2020towards}: a homogeneous GNN without the bias term, trained on the source domain; (3) \textbf{Graph autoencoder (GAE)} pretext task \citep{kipf2016variational}; (4) \textbf{Node masking (NM)} pretext task from \cite{hu2019strategies}, where at each training iteration we mask $10\%$ of the node features and the goal is to reconstruct them. If the graph does not have node features, the task is to predict the degree of the masked nodes. (5) \textbf{Node metric learning (NML)}: we use metric learning to learn useful node representations. We use a corruption function that, given a graph and a corruption parameter $p\in[0,1]$, replaces $p|E|$ of the edges with random edges, and thus can generate positive ($p=0.1$) and negative ($p=0.3$) examples for all nodes of the graph. We train with the triplet loss \citep{weinberger2009distance}. \revision{(6) \textbf{Contrastive learning (CL)}: In each iteration, we obtain two similar versions of each graph, which are used to compute a contrastive loss \cite{qiu2020gcc,you2020graph} against other graphs. We follow the protocol of \cite{you2020graph}, using a corruption function of edge perturbation that randomly adds and removes $5\%$ of the edges in the graph.} \textbf{Datasets.} We use datasets from \cite{Morris+2020} and \cite{karateclub} (Twitch egos and Deezer egos). We selected datasets that have a sufficient number of graphs (more than 1,000) and a non-trivial split into small and large graphs, as detailed in \appref{appen:datasets statistics}. In total, we used 7 datasets: four from molecular biology (NCI1, NCI109, D\&D, Proteins) and three from social networks (Twitch ego nets, Deezer ego nets, IMDB-Binary). In all datasets, the $50\%$ smallest graphs were assigned to the training set, and the largest $10\%$ of graphs were assigned to the test set. We further set aside a random $10\%$ of the small graphs as a validation set. \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{bar_plot_response.png} \caption{ \revision{Average {accuracy} on different size splits in the unsupervised setup for (i) $d$-pattern pretraining and (ii) no SSL (Vanilla). {Accuracy is} averaged over all the datasets in Table \ref{tab:real datasets}. }} \label{fig:diff_splits} \end{figure} \textbf{Architecture and training protocol.} The setup is the same as in \secref{sec: size gen problem empirical validation}, with a three-layer GNN in all experiments. Given a pretext task, we consider two different training procedures: (1) \textbf{Multi-task learning (MTL)} \citep{you2020does}; (2) \textbf{Pretraining (PT)} \citep{hu2019strategies}. For MTL we use equal weights for the main and SSL tasks. In the semi-supervised setup, we used equal weights for the source and target data. More details on the training procedures and the losses can be found in \appref{appen:training procedure}. \textbf{$d$-pattern distribution in real datasets.} In \appref{appen:counting patterns} we study the discrepancy between the local patterns of small and large graphs on all the datasets mentioned above. The second row of Table \ref{tab:real datasets} summarizes our findings with the total variation ($TV$) distances between the $d$-pattern distributions of small and large graphs.
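For two empirical pattern distributions, the $TV$ distance is half the $\ell_1$ distance between the normalized histograms. A minimal Python sketch of the computation (an illustration added for concreteness; $d$-patterns are simplified here to hashable keys, e.g., node degrees for $d=1$):
\begin{verbatim}
# Minimal sketch: total variation distance between the empirical
# d-pattern distributions of two graph collections.
from collections import Counter

def tv_distance(patterns_1, patterns_2):
    p, q = Counter(patterns_1), Counter(patterns_2)
    n_p, n_q = sum(p.values()), sum(q.values())
    return 0.5 * sum(abs(p[k] / n_p - q[k] / n_q)
                     for k in set(p) | set(q))

# e.g., with 1-patterns reduced to node degrees:
print(tv_distance([2, 2, 3, 3], [2, 5, 5, 5]))  # 0.75
\end{verbatim}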
The difference between these distributions is severe for all social network datasets ($TV\approx 1$) and milder for the biological datasets ($TV \in [0.15,0.48]$). Next, we will see that a discrepancy between the $d$-patterns leads to bad generalization and that correctly representing the patterns of the test set improves performance. \textbf{Results for unsupervised DA setup.} Table \ref{tab:real datasets} compares the effect of using the Pattern-tree pretext task to the baselines described above. The \emph{small graphs} row presents vanilla results on a validation set with small graphs for comparison. The accuracy on small graphs is larger by 7.3\%--15.5\% than on large graphs in 5 out of 7 datasets, indicating that the size-generalization problem is indeed prevalent in real datasets. Pretraining with the $d$-patterns pretext task outperforms the other baselines in 5 out of 7 datasets, with a $4\%$ average accuracy improvement across all datasets. Homo-GNN slightly improves over the vanilla baseline, while the other pretext tasks do not improve the average accuracy. Specifically, for the datasets with a high discrepancy of local patterns (namely, IMDB, Deezer, Proteins, and Twitch), pretraining with our SSL task improves markedly over vanilla training (by $5.4\%$ on average). Naturally, the accuracy here is lower than the state of the art on these datasets because the domain shift makes the problem harder. \revision{\figref{fig:diff_splits} {shows two additional experiments, conducted on all datasets using different size splits: first, using a gap of 65\% (training on the 30\% smallest graphs and testing on the 5\% largest graphs), and second, using} a gap of 10\% (training on the 50\% smallest graphs and testing on graphs in the 60--70 percentile). The results are as expected: (1) when training without SSL, larger size gaps hurt more; (2) SSL improves over vanilla training, especially with larger gaps.} \textbf{Results for semi-supervised DA setup.} Figure \ref{fig:few_shot_vanilla_vs_pattern} compares the performance of vanilla training versus pretraining with the pattern-tree pretext task in the semi-supervised setup. As expected, the accuracy monotonically increases with the number of labeled examples in both cases. Still, we would like to highlight the improvement we get by training on only a handful of extra examples. Pretraining with the pretext task yields better results in the case of 0, 1, and 5 labeled examples, and comparable results with 10 labeled examples. \textbf{Additional experiments.} We provide additional experiments on the synthetic tasks discussed in \secref{sec: size gen problem empirical validation} in Appendix \ref{sec:more experiments from sec solution}. We show that the pattern-tree pretext task improves generalization in the student-teacher setting (while not solving the edge count or degree prediction tasks). In addition, adding even a single labeled sample from the target distribution significantly improves performance. \revision{We additionally tested our SSL task on the combinatorial optimization problem of finding the maximal clique size in a graph; our SSL task improves over vanilla training by a factor of 2, although it does not completely solve the problem. We also tested on several tasks from the ``ogbg-molpcba'' dataset (see \cite{hu2020open}), although the results are inconclusive.
This is further discussed in \secref{sec:discussion}.} \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{barplot.png} \caption{Average classification results in the semi-supervised setup for (i) $d$-pattern pretraining and (ii) no SSL (Vanilla). Results were averaged over all the datasets in Table \ref{tab:real datasets}. } \label{fig:few_shot_vanilla_vs_pattern} \end{figure} \section{Related work} \textbf{Size generalization.} Several papers observed successful generalization across graph sizes, but the underlying reasons were not investigated~\citep{li2018combinatorial, maron2018invariant, luz2020learning}. More recently, \citet{velivckovic2019neural} showed that when training GNNs to perform simple graph algorithms step by step, they generalize better to graphs of different sizes. Unfortunately, such training procedures cannot be easily applied to general tasks. \cite{knyazev2019understanding} studied the relationship between generalization and attention mechanisms. \cite{bevilacqua2021on} study graph extrapolation using causal modeling. On the more practical side, \cite{joshi2019efficient, joshi2020learning, khalil2017learning} study the Traveling Salesman Problem (TSP) and show empirically that size generalization on this problem is hard. \cite{corso2020principal} study several multitask learning problems on graphs and evaluate how the performance changes as the size of the graphs changes. In another line of work, \citet{tang2020towards, nachmani2020molecule} considered adaptive-depth GNNs. In our paper, we focus on the predominant GNN architecture with a fixed number of message-passing layers. Several works also studied size generalization and expressivity when learning set-structured inputs \citep{zweig2020functional, bueno2020limitations}. In \cite{santoro2018measuring}, the authors study generalization in abstract reasoning. \textbf{Generalization in graph neural networks.} Several works studied generalization bounds for certain classes of GNNs \citep{garg2020generalization, puny2020graph, verma2019stability, liao2020pac, du2019graph} but did not discuss size generalization. \cite{sinha2020evaluating} proposed a benchmark for assessing the logical generalization abilities of GNNs. \textbf{Self-supervised and unsupervised learning on graphs.} One of the first papers to propose an unsupervised learning approach for graphs is \cite{kipf2016variational}, which resulted in several subsequent works \citep{park2019symmetric, salha2019keep}. \citet{velickovic2019deep} suggested an unsupervised learning approach based on predicting global graph properties from local node descriptors. \citet{hu2019strategies} suggested several unsupervised learning tasks that can be used for pretraining. More recently, \citet{jin2020self, you2020does} proposed several self-supervised tasks on graphs, such as node masking. These works mainly focused on a single-graph learning setup. \revision{ \cite{you2020graph,qiu2020gcc} applied contrastive learning techniques for unsupervised representation learning on graphs. The main difference between our SSL task and contrastive learning is that, following our theoretical observation, our SSL task focuses on representing the local structure of each node, rather than on a representation that takes the entire graph into account. } \section{Conclusion and Discussion}\label{sec:discussion} This work is a step towards gaining an understanding of the size-generalization problem in graph neural networks.
We showed that for important graph distributions, GNNs do not naturally generalize to larger graphs, even on simple tasks. We started by defining $d$-patterns, a concept that captures the expressivity of GNNs. We then characterized how the failure to generalize depends on $d$-patterns. Lastly, we suggested two approaches that can improve generalization. Although these approaches were shown to be useful for multiple tasks, there are still some tasks where generalization could not be improved. A limitation of our approach is that it assumes categorical node features and bidirectional edges with no features. We plan to expand our approach in the future to address these important use cases. As a final note, our characterization of $d$-patterns, as well as the methods we proposed, can be applied to other cases where generalization is hindered by distribution shifts, and may help to improve results in these situations as well.
\section{Introduction} Various approaches have been examined for reducing the input complexity of data to be processed by multilayer learning architectures. This is particularly of interest for the processing of large, high-resolution images by neural-inspired networks. Specifically, there is a need to reduce the effective pixel complexity of large high-definition and ultra-high-definition images to permit practical training on large image datasets. The simplest approach for reducing pixel complexity is to perform simple tile-based decimation, e.g., to perform bicubic scaling to a lower resolution. A more sophisticated approach involves performing a non-regular decimation into non-rectangular tiles, referred to as {\em superpixels} \cite{RenMalik}, that are constrained to conform to salient structures of the image. Figures 1 and 2 provide representative examples\footnote{Superpixel tessellations depicted in figures were computed using the online segmentation tool of \cite{slic}.}. The efficacy of superpixel methods has been demonstrated in a variety of applications, but various limitations have also been identified \cite{yang}. \begin{figure} \begin{center} \includegraphics[width=\linewidth,keepaspectratio]{NakaFig.png} \caption{\footnotesize Example of an image of an exterior location ({\em top}) and its decomposition into superpixels ({\em bottom}). Note the arbitrary superpixel boundaries in relatively uniform regions of the sky and pavement. } \label{fig:p-0} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\linewidth,keepaspectratio]{RoosterFig.png} \caption{\footnotesize Example of an image of an interior location ({\em left}) and its decomposition into superpixels ({\em right}). Note the arbitrary superpixel boundaries in relatively uniform regions of the shower and rug.} \end{center} \end{figure} In this paper we briefly introduce an alternative approach, based on the notion of {\em covapixels}, for the compressed representation of images and other forms of large structured pieces of information. In the following section we define an example of a covapixel representation in which the analog of a superpixel may take the form of a pair $(\mbox{$\vec{\mu}$},\mbox{${\bf C}$})$, where $\mbox{$\vec{\mu}$}$ is a $k \times 1$ vector and $\mbox{${\bf C}$}$ is a $k\times k$ symmetric (Hermitian) nonnegative-definite matrix\footnote{This matrix may generally be represented in a variety of simplified or compressed forms, e.g., as the triangular Cholesky square root.}. \section{Covapixels} The motivation underpinning the use of superpixels is the desire to obtain the data-reduction advantages of tile-based resolution reduction while minimizing the potential loss of important detail information. Unfortunately, the irregular boundaries of superpixels can have the effect of introducing spurious detail information. More specifically, the imposition of an artificial size or area constraint on superpixels will tend to introduce relatively complex artificial segmentation boundaries within large homogeneous areas of a given image. In other words, spurious feature entropy is introduced into the superpixel decomposition of the image. Upon further reflection, it becomes clear that whatever generalization of a pixel is defined must somehow produce a relatively homogeneous spatial tessellation in which each generalized pixel encodes a region that is as locally uniform as possible in terms of image detail.
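For calibration, the baseline of simple tile-based decimation mentioned in the introduction requires almost no machinery. A minimal sketch follows (our own illustration, using block averaging rather than the bicubic scaling named above, and assuming image dimensions divisible by the tile size):
\begin{verbatim}
# Minimal sketch: tile-based decimation by block averaging.
# Assumes H and W are divisible by the tile size t.
import numpy as np

def decimate(img, t):
    H, W, C = img.shape
    # Split the image into t-by-t tiles and average each tile.
    return img.reshape(H // t, t, W // t, t, C).mean(axis=(1, 3))

img = np.random.rand(512, 768, 3)
print(decimate(img, 8).shape)   # (64, 96, 3)
\end{verbatim}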
The problem that arises with superpixels is the representational complexity of their boundaries, which then increases the input complexity of whatever system/network is expected to manipulate and process them. We propose to address many of the limitations of previous methods for image complexity reduction by decomposing images in a way that encodes local image information in a simpler form, one that admits the use of standard operators from tracking and control applications. Specifically, we represent a covapixel as a vector and matrix pair that summarizes a given tile/region of an image, in which, for example, the vector defines the location (and possibly other attributes) of the region, while the matrix encodes some measure of the spatial extent of the region and/or the distribution associated with some measure of the content (feature attributes) of the region. The reason for adopting a {\em mean} and {\em covariance} representation is to permit the processing system/network to use data fusion operators such as the Kalman filter (KF) update (and its inverse/information form) \cite{kf}; Covariance Intersection (CI); Covariance Union (CU); Covariance Addition (CA); and their variants\footnote{See the appendices of \cite{jkucu} for a unified discussion of CI, CU, and CA.} (\cite{jkucu,gencu}) for most or all internal processing of covapixel information. More intuitively, the complex boundary of a superpixel is replaced with a mean and covariance statistical representation that can be interpreted (though not necessarily so) as a Gaussian probability distribution, e.g., as informally depicted in Figure 3. \begin{figure} \begin{center} \includegraphics[width=\linewidth,keepaspectratio]{SupixCovFig.png} \caption{\footnotesize Superpixel ({\em left}) from a red-brick wall in Figure 1 and its summarizing covapixel ({\em right}). The gold circles correspond respectively ({\em l-r}) to the centroid of the superpixel and the mean vector $\mbox{$\vec{\mu}$}$ of its covapixel representation.} \end{center} \end{figure} The key feature of the mean-covariance covapixel representation is that its information is in a form that can be directly used by standard data fusion operators, as depicted in Figure 4. \begin{figure} \begin{center} \includegraphics[width=\linewidth,keepaspectratio]{ScalarVsMeanCovFig.png} \caption{\footnotesize Processing of scalars derived from a superpixel ({\em left}) versus processing of mean and covariance pairs using conventional data fusion operators ({\em right}). This generalization can potentially be applied more generally to the formulation of neurons in artificial neural network-type architectures.} \end{center} \end{figure}
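To make the construction concrete, the following minimal Python sketch (our own illustration; the state is restricted to 2-D pixel coordinates, and appending feature attributes such as color to each row is a straightforward extension) computes a covapixel from the pixel set of a region and fuses two covapixels with Covariance Intersection:
\begin{verbatim}
# Minimal sketch: covapixel (mean, covariance) of a pixel region and
# Covariance Intersection (CI) fusion of two covapixels.
import numpy as np

def covapixel(pixels):
    # pixels: (n, k) array, one row per pixel in the region.
    mu = pixels.mean(axis=0)
    C = np.cov(pixels, rowvar=False)
    return mu, C

def ci_fuse(mu1, C1, mu2, C2, omega=0.5):
    # CI: omega-weighted combination of the inverse covariances.
    I1, I2 = np.linalg.inv(C1), np.linalg.inv(C2)
    C = np.linalg.inv(omega * I1 + (1.0 - omega) * I2)
    mu = C @ (omega * I1 @ mu1 + (1.0 - omega) * I2 @ mu2)
    return mu, C

region = np.array([[10., 20.], [11., 22.], [13., 21.], [12., 24.]])
mu1, C1 = covapixel(region)
mu2, C2 = covapixel(region + 2.0)   # a second, shifted region
print(ci_fuse(mu1, C1, mu2, C2))
\end{verbatim}
Covariance Intersection is attractive in this context because, unlike the plain KF update, it remains consistent even when the cross-correlation between the two fused estimates is unknown.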
Although we have focused on applications to image processing, the replacing of scalars with mean and covariance pairs could even be applied to the elements of matrices and tensors, e.g., to maintain covariance estimates of the numerical error that accrues from the application of linear algebra operators. This of course suggests the potential use of units of this kind in neural network-type architectures, which would intrinsically process mean and covariance pairs rather than scalars. One of the key potential benefits of covapixels is their simpler parametric representation relative to the complex boundaries of superpixels. It can be anticipated that this simpler representation will reduce the amount of spurious implicit feature detail that can result from complex boundaries. The critical question that remains to be answered is whether the loss of precise boundary detail results in a loss of salient information that undermines the practical utility of the approach\footnote{If this is the case, then an alternative in the opposite direction would be to represent superpixel information in the form of an adjacency matrix representing a graph that may include non-rigid joints, which would incur a significant increase in complexity but would admit various tools from linear algebra \cite{mm} to be applied for the dynamic maintenance of tessellation structures, e.g., in video applications.}. \bibliographystyle{plain}
\section{Introduction} In open-source projects, team composition and the development process are transparent and traceable, which is one of the advantages of the open-source model~\cite{MidhaPalvia2012}. Understanding the patterns and characteristics of open-source projects, where---sometimes many---developers with different roles~\cite{WangFengWangEtAl2020} work together, is an important research question, especially for projects with high public interest. The COVID-19 pandemic~\cite{ChenAllotLu2020} raises challenges for scientists in many disciplines. Computer scientists and software developers help to fight the pandemic with software systems, which must be developed under time pressure~\cite{CarrollConboy2020}. For example, apps for mobile devices that support \emph{contact tracing} of infected persons are useful for identifying local COVID-19 hot-spots and finding other persons who are potentially infected. We focus on Germany's exposure notification app \textit{Corona-Warn-App} (CWA; see Section~\ref{sec:example_cwa})\@. For the CWA, we want to analyze and see visually to what extent team members and external contributors contributed to the various sub-projects of the CWA on GitHub. Our method is to record the \emph{provenance of software development processes}~\cite{MoreauGrothMilesEtAl2008,WendelKundeSchreiber2010} and store it according to a standard provenance data model. Technically, we do repository mining to extract provenance and store it as a labeled property graph in graph databases. We query the graph for information that answers our research questions directly, or for parts of the graph that we visualize with ``standard'' graph drawing. We describe the contributions and the emerging results of our work as follows: \begin{itemize} \item A brief description of the provenance of software development processes, with a focus on open-source processes that use the version control system \texttt{git} (Section~\ref{sec:provenance}). \item An overview of how we draw graphs that visually show contributions by developers with different roles (Section~\ref{sec:graph_visualization}). \item As an example towards a user study, we present graph drawings for the Corona-Warn-App (Section~\ref{sec:example_cwa}). \end{itemize} \section{Provenance of Software Development Processes} \label{sec:provenance} Provenance can be expressed in many formats. We focus on the standard W3C \textsc{PROV}~\cite{MoreauGroth2013}, which defines the provenance data model \textsc{PROV-DM}~\cite{MoreauMissierBelhajjameEtAl2013}. The core structure of \textsc{PROV-DM} relies on the definition of the model class elements \emph{entities}~$\vcenter{\hbox{\includegraphics[scale=0.12]{images/prov-entity.pdf}}}$, \emph{activities}~$\vcenter{\hbox{\includegraphics[scale=0.12]{images/prov-activity.pdf}}}$, and \emph{agents}~$\vcenter{\hbox{\includegraphics[scale=0.12]{images/prov-agent.pdf}}}$~that are involved in producing a piece of data or artifact, and on definitions of \emph{relations} to relate these class elements, such as \emph{wasGeneratedBy}, \emph{wasAssociatedWith}, \emph{wasAttributedTo}, and \emph{used}. Each of the class elements and relations can have additional attributes. \begin{figure}[h] \centering \includegraphics[width=0.5\columnwidth]{images/prov-overview.pdf} \caption{Overview of the PROV model: class elements \emph{entities}, \emph{activities}, and \emph{agents} with relations.} \label{fig:prov-overview} \end{figure} Provenance of an entity (e.g., a software artifact) is a \emph{directed acyclic graph} (DAG).
Since all nodes and edges of this graph have a defined semantics, the provenance graph is a specific \emph{knowledge graph}. The provenance graph can be stored in graph databases as a \emph{labeled property graph}. \subsection{Provenance for \texttt{git} Repositories} To analyze software development processes, we extract \emph{retrospective provenance}~\cite{McPhillipsBowersBelhajjameEtAl2015} from repositories and store it in a graph database for further analysis (Figure~\ref{fig:repository-mining})~\cite{SchreiberBoer2020}. \begin{figure}[ht] \centering \includegraphics[width=0.8\columnwidth]{images/repository-mining.pdf} \caption{Extracting provenance from git repositories.} \label{fig:repository-mining} \end{figure} To extract provenance from \texttt{git}-based projects, we use tools that crawl the \texttt{git} repositories and additional information, such as issues or pull requests (Git2PROV~\cite{DeNiesMagliacaneVerborghEtAl2013,VerborghMagliacaneSchreiberEtAl2020} and GitHub2PROV~\cite{PackerChapmanCarr2019}). \subsection{Using and Analyzing Provenance} To analyze provenance graphs, many \emph{visual} and \emph{analytical} methods exist---including graph summarization~\cite{Moreau2015,TianHankinsPatel2008} and visual exploration~\cite{Wattenberg2006}. For example, we illustrate querying and using the provenance graph to answer the question: ``\emph{Which files have commits by team members as well as external contributors?}'' We generate a \textsc{Cypher} query that adds information about contributor roles. We retrieve member information via the GitHub API and store it in Python lists of team members and external contributors, which we insert into a \textsc{Cypher} template. This \textsc{Cypher} query creates new directed relations between persons~$\vcenter{\hbox{\includegraphics[scale=0.12]{images/prov-agent.pdf}}}$ and files~$\vcenter{\hbox{\includegraphics[scale=0.12]{images/prov-entity.pdf}}}$; for example, the relation for team members is:
\begin{Verbatim}
(:Agent)-[:CONTRIBUTES_TO {role: 'team'}]->(:Entity)
\end{Verbatim}
Then we query for files where team members and external contributors made changes in any of the file revisions; the query result is exported for visualization (see Section~\ref{sec:graph_visualization}):
\begin{Verbatim}
MATCH
  (team_member:Agent)
    -[r1:CONTRIBUTES_TO {role: 'team'}]->(file:Entity)
    <-[r2:CONTRIBUTES_TO {role: 'contributor'}]-(external_contributor:Agent)
RETURN
  team_member, file, external_contributor
\end{Verbatim} \section{Graph Visualization} \label{sec:graph_visualization} We visualize parts of the property graph that is derived from the provenance graph. We use a graph visualization that is \emph{readable} and \emph{faithful}~\cite{NguyenEadesHong2013,NguyenEades2017}. Using a Python script, we export the relevant nodes and edges from \textsc{Neo4j} and store them in intermediate files; specifically in CSV, JSON, and GraphML files, which we import into graph drawing software (Figure~\ref{fig:graph-visualization}). In the following, we use \textsc{Gephi}~\cite{BastianHeymannJacomy2009} to draw our graphs. \begin{figure}[ht] \centering \includegraphics[width=0.8\columnwidth]{images/graph-visualization-2.pdf} \caption{Querying and exporting graph data for visualization; multiple choices are possible for graph drawing, such as \textsc{Gephi}, Python with \texttt{networkx} and \texttt{matplotlib}, or \textsc{Mathematica}.} \label{fig:graph-visualization} \end{figure} During querying and exporting for visualization, we map the property graph as follows: \begin{itemize} \item PROV elements \emph{entities}~$\vcenter{\hbox{\includegraphics[scale=0.12]{images/prov-entity.pdf}}}$~ (i.e., files) and \emph{agents}~$\vcenter{\hbox{\includegraphics[scale=0.12]{images/prov-agent.pdf}}}$~ (i.e., contributors) become graph nodes with two distinct colors. \item The relations CONTRIBUTES\_TO become edges, whose color depends on the property \emph{role}. \end{itemize} For the \emph{coloring}~\cite{KarimKwonParkEtAl2019}, we choose distinct colors from two different qualitative color schemes generated by \textsc{ColorBrewer}~\cite{HarrowerBrewer2003}. Nodes use colors from the \emph{``3-class Set2''} schema\footnote{\url{https://colorbrewer2.org/?type=qualitative&scheme=Set2&n=3}}: files have a green color ({\color{graph_node_entity}{\Large $\bullet$}}) and contributors have an orange color ({\color{graph_node_agent}{\Large $\bullet$}}). Edges use colors from the \emph{``3-class Set1''} schema\footnote{\url{https://colorbrewer2.org/?type=qualitative&scheme=Set1&n=3}}: contributions from team members have a blue color ({\color{graph_edge_team}{$\longrightarrow$}}) and contributions from external contributors have a red color ({\color{graph_edge_contributor}{$\longrightarrow$}}). While the chosen colors are `print-friendly', they are not safe with regard to color blindness. The \emph{size} of a node is proportional to its degree. In our current approach, we generate two drawings for each project: one where we scale the node sizes according to the \emph{in-degree} of file nodes and a second one where we scale according to the \emph{out-degree} of contributors. For the \emph{layout}, we experimented with layout algorithms that are implemented in \textsc{Gephi}, such as \emph{Fruchterman-Reingold}~\cite{FruchtermanReingold1991}, the algorithms that come with \textsc{Graphviz}~\cite{GansnerNorth2000}, and \emph{ForceAtlas2}~\cite{JacomyVenturiniHeymannEtAl2014}.
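A minimal sketch of such an export script is given below (our own illustration: the connection parameters are placeholders, the \texttt{id} node property is a hypothetical identifier, and the exact queries in our pipeline may differ):
\begin{Verbatim}
# Minimal sketch: export CONTRIBUTES_TO relations from Neo4j to
# GraphML for import into Gephi.
import networkx as nx
from neo4j import GraphDatabase

QUERY = """
MATCH (a:Agent)-[r:CONTRIBUTES_TO]->(e:Entity)
RETURN a.id AS agent, e.id AS file, r.role AS role
"""

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))
G = nx.DiGraph()
with driver.session() as session:
    for record in session.run(QUERY):
        # Node and edge attributes are kept so that Gephi can color
        # by type/role and size nodes by degree.
        G.add_node(record["agent"], kind="agent")
        G.add_node(record["file"], kind="entity")
        G.add_edge(record["agent"], record["file"],
                   role=record["role"])

nx.write_graphml(G, "contributions.graphml")
\end{Verbatim}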
\begin{table}[h] \caption{Statistics of four selected CWA repositories: \emph{Entities} are the number of files, \emph{Agents} are the number of developers (any role), \emph{Activities} are the number of commits, \emph{Team contr.} are the number of contributions by CWA team members, \emph{Ext. contr.} are the number of contributions by external contributors, and \emph{Nodes Vis} and \emph{Edges Vis} are the numbers of nodes and edges in the graph drawing.} \label{tab:statistics} \centering \begin{tabular}{lccccccc} \toprule \textbf{GitHub Project} & \textbf{Entities} & \textbf{Agents} & \textbf{Activities} & \textbf{Team contr.} & \textbf{Ext. contr.} & \textbf{Nodes Vis} & \textbf{Edges Vis} \\ \midrule cwa-server & 4182 & 57 & 366 & 1088 & 849 & 491 & 1209 \\ cwa-documentation & 340 & 31 & 140 & 84 & 45 & 49 & 80 \\ cwa-app-android & 3672 & 56 & 379 & 571 & 1230 & 380 & 1261 \\ cwa-app-ios & 7552 & 53 & 1859 & 809 & 1107 & 287 & 909 \\ \bottomrule \end{tabular} \end{table} See Figure~\ref{fig:cwa-documentation-entity-in-degree} for an example of a graph drawing for a relatively small project using the \emph{ForceAtlas2} layout algorithm. \begin{figure}[h] \centering \includegraphics[width=0.7\columnwidth]{images/cwa-documentation-entity-in-degree.png} \caption{Files ({\color{graph_node_entity}{\Large $\bullet$}}) and contributors ({\color{graph_node_agent}{\Large $\bullet$}}) for the \texttt{cwa-documentation} project. Blue edges indicate file changes by team members ({\color{graph_node_agent}{\Large $\bullet$}}{\color{graph_edge_team}{$\longrightarrow$}}{\color{graph_node_entity}{\Large $\bullet$}}). Red edges indicate file changes by external contributors ({\color{graph_node_agent}{\Large $\bullet$}}{\color{graph_edge_contributor}{$\longrightarrow$}}{\color{graph_node_entity}{\Large $\bullet$}}). } \label{fig:cwa-documentation-entity-in-degree} \end{figure} \section{Graph Drawings for the Corona-Warn-App} \label{sec:example_cwa} The Corona-Warn-App (CWA) has been developed in a short time frame: development started in April 2020 and the app was released on \formatdate{16}{6}{2020} for Android and iOS\@. CWA is developed by SAP and Telekom using an open development process---publicly available from 12 repositories\footnote{\url{https://github.com/corona-warn-app}}. CWA has a decentralized architecture, accompanied by centrally-managed Java-based server applications to distribute findings about infected users and store test results uploaded by the laboratories. We selected four of the CWA projects for visualization, for which we stored the provenance in \textsc{Neo4j}\footnote{The database dump is available~\cite{Schreiber2020} as of \formatdate{27}{7}{2020}}. These projects differ in their project statistics regarding the number of files in the repository, the number of contributing developers, the number of commits, and the number of files where both team members and external developers made changes---which all leads to different numbers of nodes and edges for the graph drawings (Table~\ref{tab:statistics}).
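The counts in Table~\ref{tab:statistics} can be obtained directly from the provenance graph with simple aggregation queries; a minimal sketch (our own illustration, assuming contributions are modeled by the CONTRIBUTES\_TO relations created in Section~\ref{sec:provenance}; the numbers in the table were computed with our full pipeline):
\begin{Verbatim}
# Minimal sketch: per-repository counts from the provenance graph,
# corresponding to the columns of Table 1. The 'driver' is set up
# as in the previous sketch.
COUNT_QUERIES = {
    "Entities":   "MATCH (e:Entity) RETURN count(e) AS n",
    "Agents":     "MATCH (a:Agent) RETURN count(a) AS n",
    "Activities": "MATCH (c:Activity) RETURN count(c) AS n",
    "Team contr.": "MATCH (:Agent)-[r:CONTRIBUTES_TO {role: 'team'}]"
                   "->(:Entity) RETURN count(r) AS n",
    "Ext. contr.": "MATCH (:Agent)-[r:CONTRIBUTES_TO "
                   "{role: 'contributor'}]->(:Entity) "
                   "RETURN count(r) AS n",
}

with driver.session() as session:
    for name, query in COUNT_QUERIES.items():
        print(name, session.run(query).single()["n"])
\end{Verbatim}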
For each project, we generate two graph drawings with \textsc{Gephi}\footnote{The \textsc{Gephi} file is available~\cite{Schreiber2020a}.} as described in Section~\ref{sec:graph_visualization}: one where we scale node sizes proportional to the \emph{in-degree} of file nodes (see Figures~\ref{fig:cwa-app-android-entity-in-degree} and~\ref{fig:cwa-app-ios-entity-in-degree}) and a second one where we scale proportional to the \emph{out-degree} of contributors (see Figures~\ref{fig:cwa-app-android-agent-out-degree} and~\ref{fig:cwa-app-ios-agent-out-degree}). \begin{figure*}[h] \centering \subfloat[\emph{Entity In-Degree}: Size of nodes according to in-degree \newline of nodes that represent files.]{\label{fig:cwa-app-android-entity-in-degree} \includegraphics[width=0.49\columnwidth]{images/cwa-app-android-entity-in-degree-arcs.png}} \subfloat[\emph{Agent Out-Degree}: Size of nodes according to out-degree \newline of nodes that represent contributors.]{\label{fig:cwa-app-android-agent-out-degree} \includegraphics[width=0.49\columnwidth]{images/cwa-app-android-agent-out-degree-arcs.png}} \caption{Files ({\color{graph_node_entity}{\Large $\bullet$}}) and contributors ({\color{graph_node_agent}{\Large $\bullet$}}) for the \texttt{cwa-app-android} project. Blue edges indicate file changes by team members ({\color{graph_node_agent}{\Large $\bullet$}}{\color{graph_edge_team}{$\longrightarrow$}}{\color{graph_node_entity}{\Large $\bullet$}}). Red edges indicate file changes by external contributors ({\color{graph_node_agent}{\Large $\bullet$}}{\color{graph_edge_contributor}{$\longrightarrow$}}{\color{graph_node_entity}{\Large $\bullet$}}). } \label{fig:cwa-app-android} \end{figure*} \begin{figure*}[h] \centering \subfloat[\emph{Entity In-Degree}: Size of nodes according to in-degree \newline of nodes that represent files.]{\label{fig:cwa-app-ios-entity-in-degree} \includegraphics[width=0.49\columnwidth]{images/cwa-app-ios-entity-in-degree-arcs.png}} \subfloat[\emph{Agent Out-Degree}: Size of nodes according to out-degree of nodes that represent contributors.]{\label{fig:cwa-app-ios-agent-out-degree} \includegraphics[width=0.49\columnwidth]{images/cwa-app-ios-agent-out-degree-arcs.png}} \caption{Files ({\color{graph_node_entity}{\Large $\bullet$}}) and contributors ({\color{graph_node_agent}{\Large $\bullet$}}) for the \texttt{cwa-app-ios} project. Blue edges indicate file changes by team members ({\color{graph_node_agent}{\Large $\bullet$}}{\color{graph_edge_team}{$\longrightarrow$}}{\color{graph_node_entity}{\Large $\bullet$}}). Red edges indicate file changes by external contributors ({\color{graph_node_agent}{\Large $\bullet$}}{\color{graph_edge_contributor}{$\longrightarrow$}}{\color{graph_node_entity}{\Large $\bullet$}}). } \label{fig:cwa-app-ios} \end{figure*} In our graph drawings, typical patterns are visible: team members and external contributors work collaboratively on many files. Because the drawings are based on provenance data, the interpretation is that over the time of development many files were changed by developers with different roles, where a small number of developers made most of the changes. Further, more detailed interpretations and studies of the graph drawing metrics for faithfulness and readability are ongoing work. \section{Related Work} \label{sec:related_work} There are many tools for dynamic history visualization of repository changes over time.
A widely used tool is \textsc{Gource}\footnote{\url{https://gource.io}}, which generates movies that show changed files and developer activities. This differs from our approach, since we visualize ``condensed'' information about the development history that is stored in the provenance data. Especially for visualizing social interaction in open-source software projects, Ogawa et al.~\cite{OgawaMaBirdEtAl2007} use an intuitive, time-series, interactive summary view of the social groups that form, evolve, and vanish during the entire lifetime of the project. \section{Conclusion and Future Work} \label{sec:conclusion} We presented graph drawings to visually show how team members and external contributors worked on the same files in open-source projects over the course of development. Since our goal is a better understanding of such development patterns, our foremost future work is to conduct user studies to evaluate readability and faithfulness. The graph drawings can surely be improved in many ways, for example, with other layouts, color schemes (especially to support color blindness), transparency, or shapes. We plan to apply our methods to projects other than CWA; especially to huge projects with a very long development history. We plan to compare different projects, where the proportion of regular team members and external contributors is different. We are already working on using the provenance data for non-visual analytics of open-source projects. For example, to investigate whether vulnerabilities are introduced by external contributors (e.g., via pull requests), we apply static code analysis for revisions in the development history determined from the provenance data~\cite{SonnekalbHeinzeKurnatowskiEtAl2020}. \bibliographystyle{plainurl}
\section{Introduction} Human Action Recognition (HAR) is a very popular topic in computer vision. The popularity of this task is mainly due to its use in various real-world applications such as smart video surveillance, autonomous robots, virtual reality, sports video analysis, and urban planning \cite{gaur2011string,sudha2017approaches, xia2015robot,ibrahim2016hierarchical}. The goal of HAR is to identify and classify human actions from video sequences that contain spatial and temporal information related to the performed human action. Actions can be complex (e.g., preparing a meal) or simpler (e.g., walking). In this work, we focus on atomic human actions (e.g., running, dancing, jumping). Despite great progress in the last few years, human action recognition is still a challenging task due to dynamic backgrounds, occlusion, varied people appearance, and imaging conditions. In the last few years, there has been a lot of research based on deep learning to recognize human actions in videos~\cite{Karpathy2014LargeScaleVC, Simonyan2014TwoStreamCN, wang2016temporal, donahue2015long, zhou2018mict}. Since videos are 3D spatio-temporal signals, the main idea behind the majority of these studies is to extend Convolutional Neural Networks (CNNs) to include the temporal information contained in videos. Karpathy et al.~\cite{Karpathy2014LargeScaleVC} proposed several fusion techniques that slightly modify the CNN architectures to operate on stacked video frame inputs. As their results were similar to the results obtained by using individual RGB frames, these techniques were shown not to correctly model the temporal information. In order to operate in the spatio-temporal domain, Ji et al.~\cite{Ji20103DCN} proposed a 3D CNN model that performs 3D convolutions on stacked video frames to learn spatio-temporal information between consecutive frames. In addition to the fact that 3D CNNs perform similarly to 2D CNNs, they are computationally expensive to train because they contain many more parameters, and they do not model long-range temporal information. In the same context, Simonyan et al.~\cite{Simonyan2014TwoStreamCN} proposed a two-stream CNN architecture that learns spatial appearance information from RGB frames and motion information between frames using optical flow. To improve this architecture that considers only a single frame as input, Ng et al.~\cite{Ng2015BeyondSS} and Wang et al.~\cite{Wang2016TemporalSN} proposed architectures that aggregate the convolutional features at different temporal and spatial positions. However, the streams in these two-stream CNN architectures are independent and there is no shared information between them. These architectures capture only the motion information in short time windows and do not guarantee that the most representative features are kept by the pooling techniques. Another line of research incorporates human pose sequences to represent actions, as they provide valuable cues for the recognition task. Multiple studies proposed to recognize actions based on 3D poses \cite{du2015hierarchical, shahroudy2016ntu, liu2016spatio, wang2013approach}. However, these methods are less convenient for general cases, because they require special depth sensors. With the research progress in pose estimation over the past few years, some alternative approaches exploit 2D poses to recognize human actions \cite{jhuang2013towards, cheron2015p, choutas2018potion}. Similarly to the previously discussed approaches, 2D pose-based methods still represent actions by randomly learned features.
Also, they remain limited in how they integrate temporal information, which is often done in a way that does not reflect the dynamic nature of human actions. To address the aforementioned problems, we propose a novel pose-based approach for human action recognition that learns the temporal discriminative features of actions by integrating them into a compact static representation. Based on key poses, our model integrates the most representative appearance features into a single grid image to obtain a more relevant representation of the performed action. In order to restrict the analysis to only the most likely information related to the action, we only consider the human region of the scene in each frame. By fusing valuable appearance features with representative poses, our grid representation mimics an explicit attention mechanism that allows us to deal with some challenges related to real-world data, including occlusions and intra-class appearance variations. Furthermore, we tested our approach, called GRAR (Grid-based Representation for Action Recognition), on several datasets and found that it yields competitive or state-of-the-art results for individual actions as well as collective activities when integrated into a bottom-up setting. The contributions of this paper are threefold: \begin{enumerate} \item We propose a new grid action representation that encodes only discriminative appearance features. \item We consider an explicit attention mechanism that highlights the representative poses of the action and can handle challenging situations, such as occlusions and intra-class variations. \item Experiments on three publicly available benchmark datasets demonstrate the effectiveness of our proposed model by achieving competitive results compared to the state-of-the-art. \end{enumerate} \section{Related work} \label{related_work} Our proposed work is related to three lines of research: deep learning-based action recognition, pose-based video action recognition, and collective activity recognition in videos. In this section, we review notable studies related to these research areas and show how our proposed method differs from them. \subsection{Deep Learning-based Action Recognition} In recent years, deep learning methods have shown valuable capabilities in various computer vision applications ranging from image classification to action recognition in videos. CNNs \cite{lecun1998gradient} are regarded as a powerful class of models for the task of video action recognition. Simonyan et al. \cite{simonyan2014two} proposed to integrate spatial and temporal networks into a two-stream CNN architecture that is trained independently on inputs from static appearance and multi-frame dense optical flow. Similarly, based on a two-stream CNN architecture, Wang et al. \cite{wang2016temporal} introduced a model called TSN that learns video representations based on sparse temporal sampling to encode the long-range temporal structure for a better understanding of the dynamics in action videos. A 3D CNN model that extends 2D convolutions to 3D convolutions was proposed by Ji et al. \cite{Ji20103DCN} to learn spatio-temporal information between stacked consecutive frames. The problem with 3D CNNs is that they contain many more parameters, which makes them computationally expensive. Moreover, they do not allow modeling of long-range temporal information. Another widely adopted approach in this context is the use of Recurrent Neural Networks (RNNs) and their variants (e.g., LSTM and GRU).
These networks have demonstrated impressive performance in modeling long-term dependencies between frames. Donahue et al. \cite{donahue2015long} applied a long-term recurrent convolutional network to model visual time-series to recognize actions. In a different work, Du et al. \cite{du2017rpan} introduced a recurrent network based on pose and attention mechanisms, where the spatio-temporal evolution of the human pose is used to guide the process of recognizing human actions in videos. Recently, Li et al. \cite{li2018videolstm} proposed an end-to-end sequence learning framework for action classification that integrates attention via a convolutional LSTM network. \subsection{Pose-based Video Action Recognition} Human pose is considered an appearance cue that can be leveraged to guide the process of action recognition in videos. In this context, Wang et al. \cite{wang2013approach} proposed to infer the best poses for each frame by extracting spatial-part-sets and temporal-part-sets using a contrast mining algorithm \cite{dong1999efficient}, where the output is then fed to an SVM classifier in order to recognize human actions in videos. Nie et al. \cite{xiaohan2015joint} proposed a similar approach based on a spatial-temporal And-Or graph hierarchical model that decomposes human actions into three levels including poses, spatio-temporal parts, and body joints. Later, Zolfaghari et al. \cite{zolfaghari2017chained} integrated poses, motion, and raw images into a three-stream architecture to improve the action recognition performance. Recently, Choutas et al. \cite{choutas2018potion} proposed an approach called PoTion that jointly encodes the appearance and motion of semantic keypoints into a clip-level representation serving as input for a shallow CNN. \subsection{Collective Activity Recognition} The past few years have witnessed an increasing interest from the research community in collective activity recognition \cite{ibrahim2016hierarchical},\cite{shu2017cern}, \cite{bagautdinov2017social}, \cite{qi2018stagnet}, \cite{biswas2018structural}. A notable work was introduced by Choi et al. \cite{choi2009they}, where they described the activity of a person based on spatio-temporal descriptors in order to infer the high-level collective activity. Recently, multiple deep learning based models have been proposed in this context. A two-stage hierarchical temporal model was introduced by Ibrahim et al. \cite{ibrahim2016hierarchical} to recognize collective activities. In the first stage, they analyze the temporal dynamics of each person with an LSTM network. Then, they aggregate this information in the second stage with the encoded temporal group dynamic, in order to learn the interactions between people that contribute to recognizing collective activities. On the other hand, Deng et al. \cite{deng2016structure} proposed a framework that combines graphical models and deep neural networks. In this model, nodes represent both people and the scene, which allows message passing between outputs. Later, StagNet was proposed by Qi et al. \cite{qi2018stagnet}, where a semantic graph is used to model individual actions as well as their corresponding spatial relations, whereas the temporal interactions are modeled with a structural-RNN architecture. \begin{figure*}[ht] \centering \includegraphics[scale=0.36]{pipeline_final.pdf} \caption{The pipeline of our proposed GRAR model. From each frame, we extract human poses using a pose estimation method.
We then select the most representative key poses for each human based on an unsupervised clustering method. These estimated key poses are combined with their corresponding RGB sources and then concatenated. This results in a compact grid representation $G_{h}$ that encodes the relevant RGB and pose information related to the performed action. Finally, we train a deep convolutional neural network on the obtained grid representations to predict the corresponding action category $A_{h}$.} \label{pipeline} \end{figure*} Different from these methods, we propose a novel action recognition approach that relies mainly on relevant poses to build an action representation, instead of considering random features or frames. Additionally, with an attention mechanism guided by key poses, our approach is more robust to occlusion and intra-class appearance variation problems. Following a bottom-up design, our approach can be successfully leveraged to recognize activities based on individual actions. \section{Grid-based Representation For Human Action Recognition} \label{proposed_approach} In this section, we present the overall design of our novel GRAR model, whose ultimate goal is to efficiently recognize the actions of all persons in a video sequence by combining important temporal features associated with poses. The overall pipeline is illustrated in Fig. \ref{pipeline}. To obtain a more relevant representation of the performed action, first of all, we select the human region of the scene in each frame, instead of taking the entire frame. This restricts the analysis to focus only on the most relevant information related to the person's action. Given the tracks of every person, we extract the sequence of 2D human poses at each time step. Once the poses are extracted, we normalize the joint coordinates with respect to the bounding box position at each time $t$ to get their corresponding relative positions. Based on these normalized joint positions, we select the most representative poses of each action using an unsupervised clustering algorithm. This allows us to keep only the information of interest about the performed action. We will refer to these representative poses as key poses in the rest of the paper. From the chosen key poses, we create our new static grid representation that integrates only the most discriminative temporal RGB and pose information, which allows our method to deal with periodicity in actions as well as occlusion and intra-class variation problems. Indeed, since we are using key poses, frames where the person is heavily occluded will likely be ignored because they will result in unstructured or infrequent poses. Other times, poses can allow information to be extracted through occlusions (see Fig.~\ref{occlusions}). Finally, we recognize the performed actions by training a convolutional neural network on sets of these grid images. Because actions are represented in a grid image, we can benefit from pre-trained image classification networks. In the following sections, we describe in detail the main components of our proposed model. Since we are interested mainly in human action recognition in this work, we assume that humans are already detected. \subsection{Human Pose Estimation} Human actions are highly correlated with their corresponding poses. A 2D human pose is a continuous representation of the body parts in the image space.
Let $V_{h} = \{V_1,V_2,...,V_t,...,V_T\} \in {\rm I\!R}^{T \times M \times d}$ denote the pose sequence for a human $h$ in $T$ frames, where $d$ is the spatial dimension and $M$ is the total number of human body keypoints (joints). The pose vector $V_t$ is defined as $V_t = \{w_{t1}, w_{t2},...,w_{tm},...,w_{tM}\}$, where $w_{tm}$ is the spatial coordinate of the keypoint $m$ in a single frame at time $t$. In this work, we use the recently published High-Resolution Net (HRNet) pose estimation model~\cite{wang2020deep} for its good performance on small persons. HRNet is a top-down architecture based on repeated multi-scale fusions between parallel multi-resolution sub-networks. For computational purposes, in our proposed method, we use the HRNet-W32 network trained using the $MSE$ loss function defined as: \begin{equation} \mathcal{L}_{mse} = \frac{1}{M} \sum_{m=1}^{M} || C_m - \hat{C_m}||_{2}^{2} \end{equation} where $\hat{C_m}$ and $C_m$ are the predicted and the ground truth confidence maps for the $m^{th}$ joint, respectively. Given a sequence of video frames $S \in {\rm I\!R}^{T \times H \times W \times 3}$, the model predicts the pose sequence $V_h$ for a human $h$ based on the bounding box sequence $U_h \in {\rm I\!R}^{T_h \times H_h \times W_h \times 3}$. As we use the COCO keypoints format \cite{lin2014microsoft} for the 2D pose estimation, we consider $M=17$ and $d=2$. In some special cases, object detection and tracking methods can fail to estimate good quality bounding boxes. In the case of humans, this corresponds to a bounding box that does not cover all the body joints of the person of interest. To solve this issue, once we get the pose keypoint coordinates, we apply a bounding box refinement process, which consists in modifying the bounding box coordinates $U_{h}^{'} \in {\rm I\!R}^{T_h \times H_h^{'} \times W_h^{'} \times 3}$ to enclose the extreme joint coordinates. \subsection{Relevant Features Selection} Frame selection is a challenging task in action recognition in RGB videos. Considering all the frames, or choosing some frames randomly to represent an action, induces redundant or irrelevant information in the learning process, which is directly reflected in the final classification accuracy. In order to reduce the complexity of our proposed model and increase its generalization ability, we propose to focus on the most relevant information by extracting key poses, a subset of distinctive human poses for each action. For example, the distinctive poses for the action ``running'' can match frames where the right hand, and both the left knee and foot, are all heading forward, in opposite directions to the left hand and the right knee and foot. At first, we make the keypoint sequence $V_h$ invariant to the position in the scene and the scale of the person of interest. Based on the corrected bounding box sequence $U{'}_{h}$, we transform the coordinates of each vector $V_t$ from the frame $t$ coordinate space to the corresponding bounding box coordinate space $U^{'}_{t}$. Then, we normalize the obtained coordinates with respect to the dimensions of the considered bounding boxes. Now that we have constructed the $V^{'}_{h} \in {\rm I\!R}^{T \times M \times d}$ sequence for each person in the same normalized coordinate space, we proceed to cluster these keypoint sequences in order to extract the most discriminative poses, i.e., the key poses.
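As an illustration of the refinement and normalization steps above, consider the following minimal sketch; the array shapes follow our notation ($M{=}17$ joints with $d{=}2$ coordinates), while the function names and the box format are illustrative assumptions.

\begin{Verbatim}
import numpy as np

def refine_box(box, joints):
    """Enlarge an (x1, y1, x2, y2) box to enclose all predicted joints."""
    x1, y1, x2, y2 = box
    jx, jy = joints[:, 0], joints[:, 1]   # joints: (M, 2) in frame coords
    return (min(x1, jx.min()), min(y1, jy.min()),
            max(x2, jx.max()), max(y2, jy.max()))

def normalize_pose(joints, box):
    """Map joints into the box coordinate space and scale to [0, 1]."""
    x1, y1, x2, y2 = box
    rel = joints - np.array([x1, y1])           # translate to box origin
    return rel / np.array([x2 - x1, y2 - y1])   # divide by box dimensions

# Usage on one frame: refine first, then normalize with the refined box.
box = (120.0, 40.0, 180.0, 200.0)               # hypothetical detection
joints = np.random.uniform([110, 35], [190, 210], size=(17, 2))
normalized = normalize_pose(joints, refine_box(box, joints))
\end{Verbatim}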
To this end, we employ the well-known Partitioning Around Medoids (PAM) clustering algorithm~\cite{kaufman1987clustering} based on a pairwise dissimilarity metric. This method is shown to be more robust to outliers than the sum of squared Euclidean distances used by K-Means. This fact is also demonstrated by our experiments in Section~\ref{experiments}. Formally, given an action represented by the $V^{'}_{h}$ pose sequence, PAM provides us with a set of $K$ pose clusters $(C_1, C_2...C_K)$ along with their reference pose medoids $V_{h}^{*} = \{V_{k_{1}},V_{k_{2}},...,V_{k_{K}}\} \subset V^{'}_{h}$ and $V_{h}^{*} \in {\rm I\!R}^{K \times M \times d}$ (i.e., the most centrally located pose in a cluster), where both the intra-cluster pose similarity and the inter-cluster dissimilarity are maximized. The learning process of extracting key poses for each person performing a specific action is done as follows: at first, we randomly select a set of $K$ poses for each person, then we assign the remaining poses to the cluster whose medoid is most similar to them. After that, we select an arbitrary non-medoid pose $x$ and compute the cost of swapping the initial medoid with the new candidate medoid. The updated total cost is based on the $\ell_1$ norm and is defined as: \begin{equation} V_{k_{1}},V_{k_{2}},...,V_{k_{K}} = \mathop{\arg\min} \sum_{i=1}^{K} \sum_{x \in C_{i}} || x - V_{k_{i}}||_{1} \end{equation} After convergence, the obtained pose medoids serve as the key poses, which are used as the input feature vector for the next module. \subsection{Grid Representation Learning} Several research works in the literature have explored multiple fusion techniques to integrate temporal information \cite{Karpathy2014LargeScaleVC} and complementary modalities \cite{Simonyan2014TwoStreamCN} in order to improve the final action recognition accuracy. However, these methods remain limited in the way they integrate information over time. In addition, they rely on randomly selected information, which results in less representative features. Different from these methods, our GRAR model is based only on relevant RGB information. Without requiring any additional annotations (i.e., pose or skeleton data), the estimated key poses are used as an explicit attention mechanism. By fusing temporal RGB and pose features into a grid image representation, our model efficiently encodes the key patterns needed to recognize human actions. In order to create our new grid structure, we proceed as follows: for each key pose in $V_{h}^{*}$, we get the RGB information of interest $I_{{h}_k}$ based on its corresponding bounding box. In an explicit manner, we put attention on the key poses by modifying the pixels $p \subset I_{{h}_k}$ if $p \in v_{h_k}$, so that the pose vector is encoded directly in the selected relevant RGB region. This pose-guided attention technique allows us to deal with challenging real-world situations such as intra-class variations and occlusions. Fig. \ref{occlusions} (b) shows an example where our model can successfully compensate for missing information caused by occlusions. \begin{figure}[ht] \centering \includegraphics[scale=0.4]{occ_out.pdf} \caption{Examples from the Volleyball and CAE datasets of (a) a pose estimation failure case considered as an outlier during key pose selection. (b) and (c) show occluded humans handled with our explicit attention on pose.} \label{occlusions} \end{figure} Afterwards, we put each fused $I_{{h}_k}$ in a separate cell, forming our main grid structure $G_{h}$ (see Fig.~\ref{grid} for an example of a grid image).
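A minimal sketch of this cell fusion and grid assembly is given below. Drawing small discs at the key-pose joints stands in for our actual overlay, the zero-valued border anticipates the padding motivated in the next paragraph, and all names and sizes are illustrative assumptions.

\begin{Verbatim}
import numpy as np
import cv2  # OpenCV, assumed available for drawing

def fuse_cell(crop, joints, radius=2):
    """Overlay key-pose joints (pixel coordinates) onto an RGB crop."""
    cell = crop.copy()
    for x, y in joints.astype(int):
        cv2.circle(cell, (int(x), int(y)), radius, (255, 255, 255), -1)
    return cell

def build_grid(cells, cols, border=3):
    """Zero-pad each fused cell and tile the cells row-major.
    Cells are padded to a common size instead of being re-scaled, so
    their original resolution is preserved; len(cells) is assumed to
    be a multiple of cols."""
    h = max(c.shape[0] for c in cells) + 2 * border
    w = max(c.shape[1] for c in cells) + 2 * border
    padded = []
    for c in cells:
        canvas = np.zeros((h, w, 3), dtype=c.dtype)
        canvas[border:border + c.shape[0], border:border + c.shape[1]] = c
        padded.append(canvas)
    rows = [np.hstack(padded[i:i + cols])
            for i in range(0, len(padded), cols)]
    return np.vstack(rows)
\end{Verbatim}

With $K{=}4$ key poses and \texttt{cols=2}, this yields a $2 \times 2$ layout similar to Fig.~\ref{grid}.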
Before grouping the obtained cells, we concatenate each cell with a zero-valued border. Such an operation is necessary to avoid learning unnecessary patterns created by adjacent cells when the convolution kernel sizes are larger than the inter-cell spacing. Our experiments show that considering a 3 pixel-wide boundary is enough according to the filter sizes of the adopted CNN architecture. It is to note that none of the $I_{{h}_k}$ cells are re-scaled; keeping them at their original resolution allows our model to jointly learn the features as well as their corresponding distance from the camera. \begin{figure}[ht] \centering \includegraphics[scale=0.4]{grid.pdf} \caption{Example of a grid image from the CAE dataset.} \label{grid} \end{figure} Once the grids are created, we train a CNN to learn human action patterns from these sets of grids. We adopt the Inception-ResNet-v2 model \cite{szegedy2017inception} pre-trained on the ImageNet dataset \cite{deng2009imagenet} as our backbone CNN architecture. This model is a hybrid Inception network that uses residual connections rather than filter concatenation. We use the categorical cross-entropy as our loss function, which is given by: \begin{equation} \mathcal{L}_{CE} = - \frac{1}{N} \sum_{i=1}^{N} \log \frac{e^{w^{T}_{y_i}G_i+b_{y_i}}}{\sum_{j=1}^{n} e^{w^{T}_{j}G_i+b_{j}}} \end{equation} where $G_i$ refers to the $i^{th}$ grid representation, $N$ is the number of training grids, $y_i$ is the class label of $G_i$, $W$ is the learned weight matrix, and $b$ is the intercept. Considering only the human region of the scene instead of the whole frame allows us to combine a multitude of cues within the same image representation without excessively reducing the original image resolution. Excessive downsampling of the image resolution to fall within the CNN input size often results in losing information valuable for action recognition. Moreover, being based solely on the human region with pose information maximizes the information closely related to the performed actions, and thus allows our model to generalize well to other scenes. For example, in the case of a walking action, if our model is trained only with samples of humans walking on a sidewalk, in the inference phase it will be able to recognize that a human is walking even on grass. This can be explained by the fact that the extracted deep features are more related to the person than to their environment. \section{Experiments} \label{experiments} In this section, we evaluate the performance of our proposed model. We first present the considered benchmark datasets and the implementation details. Then we report the results of a series of ablation studies to analyze the impact of each component of our GRAR model on the recognition performance, followed by a comparison with the state-of-the-art. \subsection{Datasets} We evaluate our model on three publicly available datasets: the Collective Activity dataset~\cite{choi2009they}, the Collective Activity Extended dataset~\cite{choi2011learning}, and the Volleyball dataset~\cite{ibrahim2016hierarchical}. \subsubsection{Collective Activity dataset (CA) \cite{choi2009they}} The Collective Activity dataset is a popular dataset for both individual action and group activity recognition. It contains 44 video sequences with a resolution of 640$\times$480 pixels from 5 individual action and group activity categories (talking, crossing, queuing, waiting, walking).
The collective activity label of a scene is defined based on the performed action of the majority of individuals. For the train/test split, we follow the same evaluation protocol suggested in~\cite{choi2009they}. \subsubsection{Collective Activity Extended dataset (CAE) \cite{choi2011learning}} The Collective Activity Extended dataset is an extended version of the original Collective Activity dataset where the ``Walking'' activity was replaced by two new activities, ``Jogging'' and ``Dancing''. The reason why ``Walking'' was removed is that in some scenarios this activity is mixed with the ``Crossing'' activity. To train our model, we followed the testing scheme mentioned in~\cite{deng2016structure} and used $\frac{2}{3}$ of the videos for training and the rest for testing. \subsubsection{Volleyball dataset \cite{ibrahim2016hierarchical}} The Volleyball dataset contains 4830 frames collected from 55 YouTube videos, all about volleyball games. Each player is labeled with one of these actions: moving, spiking, waiting, blocking, jumping, setting, falling, digging, and standing. We adopted the same testing setup used in~\cite{ibrahim2016hierarchical}, where $\frac{2}{3}$ of the data is used for training and $\frac{1}{3}$ for testing. \subsection{Implementation Details} We implemented our proposed model using the TensorFlow library~\cite{abadi2016tensorflow}. We use Inception-ResNet-v2~\cite{szegedy2017inception} as our backbone CNN architecture, pre-trained on the ImageNet dataset~\cite{deng2009imagenet}. This network consists of 164 layers, with an image input size of 299$\times$299. For all the experiments and datasets, we used the ADAM optimizer \cite{kingma2014adam} and set its hyperparameters to $\beta_1$ = 0.9, $\beta_2$ = 0.999, $\epsilon$ = 0.001. For the CA and CAE datasets, we used 4 key poses and trained the model for 100 epochs with a mini-batch size of 16 and an initial learning rate of $10^{-3}$, reduced by a factor of 0.2 after 10 patience epochs. For the Volleyball dataset, we used 6 key poses and trained the network with a learning rate of $10^{-5}$ for 130 epochs with a mini-batch size of 32. To track humans in the scene, for the Volleyball dataset, we used the tracker proposed by Cao et al. \cite{7410695}, which is implemented in the Dlib library \cite{lan2011discriminative}. For the CA and CAE datasets, we used the tracklets provided by \cite{choi2009they}. We used the High-Resolution Net (HRNet) algorithm \cite{wang2020deep} to compute human postures across frames. Specifically, we used the pose-hrnet-w32 architecture trained on the COCO dataset. All our experiments were run on a single TITAN Xp NVIDIA GPU. \subsection{Ablation Studies} In order to explore the effect of every component of our model on the performance, we conducted extensive ablation studies on the CAE dataset with the following variants. \paragraph*{Clustering Analysis} We start by studying the impact of different clustering settings on the recognition performance. The goal here is to find the key poses forming the discriminative grid representation. We compare three clustering methods, namely PAM, K-means, and the Gaussian Mixture Model (GMM) estimated with the Expectation-Maximization algorithm. For action representation learning, we used the same CNN architecture and parameter settings. The results are reported in Table~\ref{Tab0}. \begin{table}[ht] \caption{Impact of the clustering algorithms and number of key poses on the performance of our model on the CAE dataset.
\textbf{Boldface: Best result.}} \label{Tab0} \centering \begin{tabular}{l c c c} \cline{2-4} & K-means & PAM & GMM \\ \hline K = 2 & 94.0\% & 94.6\% & 90.3\% \\ K = 4 & 94.5\% & \textbf{95.2}\% & 91.7\% \\ K = 6 & 92.6\% & 93.9\% & 91.9\% \\ \hline \end{tabular} \end{table} Compared with K-means and GMM, we can see that PAM gives the highest accuracy (95.2\%). This can be explained by the fact that PAM is robust against outliers. In fact, it minimizes the average dissimilarity of human poses in each cluster, rather than minimizing the intra-cluster sum of squared distances as done by K-means. It is important to highlight that the poses we use in our model are not manually labeled but instead are predicted with the HRNet model, which can sometimes fail to estimate high-quality poses. Fig. \ref{occlusions} (a) illustrates an example of a failure case in pose estimation, which is considered in our study as an outlier pose. Compared to PAM and K-means, GMM achieves the worst recognition rate in this experiment. This can be explained by the fact that our input pose data is not normally distributed. Additionally, we evaluated the effect of the number of key poses on the recognition performance. In this experiment, we consider several numbers of key poses, from two to six. Intuitively, one could say that the more key poses we consider, the more information we gain about the performed action. However, our evaluation on the datasets demonstrates that above a certain number of key poses, the recognition accuracy starts to decrease. We found that the performance is directly related to the input size of the backbone CNN model that we are using. In fact, by downsampling images to fit the input size of the CNN, some important cues for the recognition task are usually lost. So far, our findings demonstrate that a larger downsampling rate makes performance poorer. After extensive experiments, we concluded that the number of key poses must be chosen based on the average size of humans in the scene. Furthermore, to keep the original full resolution for each key pose image, our grid must have a resolution that is close enough to the default CNN input size. \paragraph*{Features selection} Next, we study the importance of feature selection for human action recognition. We compared random selection against pose-based selection for the grid representation. For pose-based selection, we also compared two strategies: using only the pose features (K-Pose; in this case, key poses are drawn in white over a black background) and using only the RGB image corresponding to a key pose (K-RGB). For action representation learning, we employed the same CNN architecture in the three experiments. The results are reported in Table~\ref{Tab1}. Compared with random selection, choosing RGB information based on the human pose (K-RGB) gives an improvement of 3.1\%. This demonstrates that using the pose estimation to select the RGB data provides us with relevant and discriminative information for human action recognition. On the other hand, using only pose features (K-Pose) with a standard CNN model does not yield good results. This experiment reveals that although the pose contains valuable cues about the performed action, using it alone does not provide the standard 2D CNN with enough significant features for recognition. \begin{table}[ht] \caption{Impact of different modules on the accuracy of GRAR on the CAE dataset.
\textbf{Boldface: Best result.}} \label{Tab1} \centering \resizebox{0.5\textwidth}{!}{\begin{tabular}{l c} \hline Model Variants & Accuracy\\ \hline Random & 89.2\% \\ Key poses only (K-Pose) & 80.5\% \\ Key Frame (K-RGB) & 92.3\% \\ Key Frame+Box enhancement (K-RGB+EB) & 92.9\% \\ Key Frame+Box enhancement+Pose Attention (K-RGB+EB+PA) & \textbf{95.2}\% \\ \hline \end{tabular} } \end{table} \paragraph*{Bounding Boxes Enhancement (EB)} Here, we evaluate the effectiveness of making use of the human pose to correct inaccurate bounding boxes used in the GRAR pipeline. As illustrated in Table \ref{Tab1} in the K-RGB and K-RGB+EB rows, we can see that correcting the human bounding boxes gives us a 0.6\% improvement. This emphasizes the importance of incorporating human joint features to enhance bounding box quality, which is useful not only for the action recognition task, but also for other computer vision problems. \paragraph*{Pose Attention (PA)} Finally, we study the impact of the introduced pose attention technique, where a key pose is drawn over the corresponding RGB image. We compared the recognition performance of K-RGB+EB against K-RGB+EB combined with pose-based attention. In both experiments, we considered the enhanced version of the bounding boxes (EB). As indicated in Table \ref{Tab1} in the K-RGB+EB and K-RGB+EB+PA rows, we can conclude that putting explicit pose attention on the appearance representation improves the recognition performance by around 2.3\%. This indicates that if the estimated pose is of good quality, an attention mechanism based on it makes the model more robust against intra-class variation and occlusion problems, as explained in Fig. \ref{occlusions}. \subsection{Comparison with the State-of-the-Art} In this section, we compare the performance of our GRAR model with respect to several state-of-the-art methods including Learning context \cite{choi2011learning}, Social Cues for activity recognition \cite{tran2013social}, Hierarchical Deep Temporal Model \cite{ibrahim2016hierarchical}, Structure Inference Machines \cite{deng2016structure}, CERN \cite{shu2017cern}, StagNet \cite{qi2018stagnet}, Fast collective activity \cite{zhang2019fast_art}, Gaim \cite{lu2019gaim}, ARG \cite{wu2019learning}, SSU \cite{bagautdinov2017social}, and SRNN \cite{biswas2018structural}. \subsubsection{Results on the Collective Activity dataset} Now that we have evaluated multiple variants of our model for individual action recognition, our goal here is to explore the ability of our model to recognize collective activities based on the individual ones. As previously done, we derive the collective activity label of a scene based on the performed action of the majority of individuals. Moreover, we do not use any ground truth annotations for the pose. Table \ref{cad} summarizes the state-of-the-art performance on the Collective Activity dataset (CA). Our model with the pose-based grid representation outperforms the compared state-of-the-art methods. For example, our model achieves $\approx$10\% higher accuracy than recent methods based on hierarchical relational networks \cite{ibrahim2016hierarchical} and recurrent neural networks for activity recognition \cite{deng2016structure}. This is mostly because we focus primarily on highly discriminative RGB features along with their corresponding poses. \begin{table}[!t] \caption{Comparison of the activity recognition performance of state-of-the-art methods versus our model evaluated on the CA dataset.
\textbf{Boldface: Best result.}} \label{cad} \centering \begin{tabular}{l c} \hline Method & Accuracy\\ \hline Choi et al. \cite{choi2011learning} & 70.9\% \\ Tran et al. \cite{tran2013social} & 78.7\% \\ Ibrahim et al. \cite{ibrahim2016hierarchical} & 81.5\% \\ Deng et al. \cite{deng2016structure} & 81.2\% \\ Shu et al. \cite{shu2017cern} & 87.2\%\\ Qi et al. \cite{qi2018stagnet} & 89.1\% \\ Zhang et al. \cite{zhang2019fast_art} & 83.8\% \\ Lu et al. \cite{lu2019gaim} & 90.6\% \\ Wu et al. \cite{wu2019learning} & 91.0\% \\ \hline GRAR (Ours) & \textbf{91.5}\% \\ \hline \end{tabular} \end{table} \begin{table}[ht] \caption{Comparison of the activity recognition performance of state-of-the-art methods versus our model evaluated on the CAE dataset. \textbf{Boldface: Best result.}} \label{cade} \centering \begin{tabular}{l c} \hline Method & Accuracy\\ \hline Choi et al. \cite{choi2011learning} & 82.0\% \\ Tran et al. \cite{tran2013social} & 80.7\% \\ Ibrahim et al. \cite{ibrahim2016hierarchical} & 94.2\% \\ Deng et al. \cite{deng2016structure} & 90.2\% \\ Qi et al. \cite{qi2018stagnet} & 89.7\% \\ Lu et al. \cite{lu2019gaim} & 91.2\% \\ Zhang et al. \cite{zhang2019fast_art} & 96.2\% \\ \hline GRAR (Ours) & \textbf{97.4}\% \\ \hline \end{tabular} \end{table} \subsubsection{Results on the Collective Activity Extended dataset} The experimental results of human activity recognition on the CAE dataset are shown in Table \ref{cade}. Our pose-based grid model again achieves state-of-the-art performance with 97.4\% collective activity recognition accuracy. This performance demonstrates the effectiveness of choosing relevant RGB information and incorporating key pose features as an explicit attention mechanism in compensating for the model's weakness when facing ambiguous RGB appearance. \subsubsection{Results on the Volleyball dataset} We further conducted experiments on the Volleyball dataset. Table \ref{volley} shows the comparison of our proposed model with recent state-of-the-art methods for individual action recognition. As can be seen, our GRAR model outperforms most of the state-of-the-art methods \cite{ibrahim2016hierarchical, shu2017cern, bagautdinov2017social, qi2018stagnet, biswas2018structural} with an accuracy of 82.9\%. It is also highly competitive with the ARG method \cite{wu2019learning}; the latter uses a graph convolutional network that encodes complex actor relations. \begin{table}[!t] \caption{Evaluation of the action recognition performance of state-of-the-art methods versus our proposed model on the Volleyball dataset. \textbf{Boldface: Best result.}} \label{volley} \centering \begin{tabular}{l c} \hline Method & Accuracy\\ \hline Ibrahim et al. \cite{ibrahim2016hierarchical} & 75.9\% \\ Shu et al. \cite{shu2017cern} & 69.0\%\\ Bagautdinov et al. \cite{bagautdinov2017social} & 82.4\% \\ Qi et al. \cite{qi2018stagnet} & 81.9\% \\ Biswas et al. \cite{biswas2018structural} & 76.6\% \\ Wu et al. \cite{wu2019learning} & \textbf{83.1}\%\\ \hline GRAR (Ours) & 82.9\% \\ \hline \end{tabular} \end{table} \section{Conclusion} \label{conclusion} In this paper, we have presented GRAR, a novel pose-based model for human action recognition that uses a grid image of key poses. Our results consistently demonstrate that selecting RGB appearance based on the most discriminative human poses and combining them together in an image leads to considerable improvements. We obtained promising results compared to state-of-the-art approaches on three public benchmark datasets.
Our proposed method has several benefits: 1) it is compact, 2) it exploits powerful CNN architectures designed for image classification tasks without requiring any architectural changes, and 3) it is robust against occlusions, intra-class action variations, and incorrect human pose estimation. \section*{Acknowledgment} This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). We thank NVIDIA Corporation for their donation of a Titan Xp GPU card. \bibliographystyle{IEEEtran}
\section{Statement of the Potential Broader Impact} In this work, we study the problem of {\it imbalanced learning} (IL), which is a common problem related to machine learning and data mining. Such a problem widely exists in many real-world application domains such as finance, security, biomedical engineering, industrial manufacturing, and information technology~\cite{haixiang2017learning-from-imb-review}. IL methods, including the proposed {\sc Mesa} framework in this paper, aim to fix the bias of learning models introduced by skewed training class distributions. We believe that proper usage of these techniques will lead us to a better society. For example, better IL techniques can detect phishing websites/fraudulent transactions to protect people's property, and help doctors diagnose rare diseases/develop new medicines to save people's lives. With that being said, we are also aware that using these techniques improperly can cause negative impacts, as misclassification is inevitable in most learning systems. In particular, we note that when deploying IL systems in medical-related domains, misclassification (e.g., failure to identify a patient) could lead to medical malpractice. In such domains, these techniques should be used as auxiliary systems; e.g., when performing diagnosis, we can adjust the classification threshold to achieve higher recall and use the predicted probability as a reference for the doctor's diagnosis. While there are some risks with IL research, as we mentioned above, we believe that with proper usage and monitoring, the negative impact of misclassification can be minimized and IL techniques can help people live a better life. \section{Conclusion} \label{section:conclusion} We propose a novel imbalanced learning framework, {\sc Mesa}. It contains a meta-sampler that adaptively selects training data to learn effective cascade ensemble classifiers from imbalanced data. Rather than following random heuristics, {\sc Mesa} directly optimizes its sampling strategy for better generalization performance. Compared with prevailing meta-learning IL solutions that are limited to being co-optimized with DNNs, {\sc Mesa} is a generic framework capable of working with various learning models. Our meta-sampler is trained on task-agnostic meta-data and thus can be transferred to new tasks, which greatly reduces the meta-training cost. Empirical results show that {\sc Mesa} achieves superior performance on various tasks with high sample efficiency. In future work, we plan to explore the potential of meta-knowledge-driven ensemble learning in the long-tail multi-classification problem. \section{Experiments} \label{section:experiments} To thoroughly assess the effectiveness of {\sc Mesa}, two series of experiments are conducted: one on controlled synthetic toy datasets for visualization, and the other on real-world imbalanced datasets to validate {\sc Mesa}'s performance in practical applications. We also carry out extended experiments on real-world datasets to verify the robustness and cross-task transferability of {\sc Mesa}. \subsection{Experiment on Synthetic Datasets} {\bf Setup Details.} We build a series of imbalanced toy datasets corresponding to different levels of underlying class distribution overlapping, as shown in Fig.~\ref{fig:visualization}. All the datasets have the same imbalance ratio\footnote{Imbalance ratio (IR) is defined as $\mathcal{|N|/|P|}$.} ($|\mathcal{N}|/|\mathcal{P}|=2,000/200=10$).
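For reference, a comparable toy setting can be generated as sketched below, where \texttt{class\_sep} is a stand-in knob for the level of distribution overlapping; the exact generator behind Fig.~\ref{fig:visualization} (with its ``$\cap$''-shaped majority class) differs.

\begin{Verbatim}
from collections import Counter
from sklearn.datasets import make_classification

# 2,000 majority vs. roughly 200 minority samples (IR = 10); lowering
# class_sep increases the underlying class distribution overlap.
X, y = make_classification(
    n_samples=2200, n_features=2, n_informative=2, n_redundant=0,
    weights=[2000 / 2200], class_sep=0.8, flip_y=0.01, random_state=0)
print(Counter(y))  # approximately {0: 2000, 1: 200}
\end{Verbatim}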
In this experiment, {\sc Mesa} is compared with four representative EIL algorithms from the 4 major EIL branches (Parallel/Iterative Ensemble + Under/Over-sampling), i.e., {\sc SmoteBoost~\cite{chawla2003smoteboost}, SmoteBagging~\cite{wang2009smotebagging}, RusBoost~\cite{seiffert2010rusboost}}, and {\sc UnderBagging~\cite{barandela2003underbagging}}. All EIL methods are deployed with decision trees as base classifiers and an ensemble size of $5$. {\bf Visualization \& Analysis.} We plot the input datasets and the decision boundaries learned by different EIL algorithms in Fig.~\ref{fig:visualization}, which shows that {\sc Mesa} achieves the best performance under different situations. We can observe that all tested methods perform well on the less-overlapped dataset (1st row). Note that random under-sampling discards some important majority samples (e.g., data points at the right end of the ``$\cap$''-shaped distribution) and causes information loss. This makes the performance of {\sc RusBoost} and {\sc UnderBagging} slightly weaker than that of their competitors. As overlapping intensifies (2nd row), an increasing amount of noise gains high sample weights during the training process of boosting-based methods, i.e., {\sc SmoteBoost} and {\sc RusBoost}, thus resulting in poor classification performance. Bagging-based methods, i.e., {\sc SmoteBagging} and {\sc UnderBagging}, are less influenced by noise but still underperform {\sc Mesa}. Even on the extremely overlapped dataset (3rd row), {\sc Mesa} still gives a stable and reasonable decision boundary that fits the underlying distribution. This is because the meta-sampler can adaptively select informative training subsets towards good prediction performance while being robust to noise/outliers. All the results show the superiority of {\sc Mesa} over traditional EIL baselines in handling distribution overlapping, noise, and poor minority class representation. \begin{figure}[t] \centering \parbox{.5\textwidth}{ \includegraphics[width=\linewidth]{figures/visualization.png} } \hspace{\fill} \parbox{.48\textwidth}{ \vspace{5pt} \caption{ Comparisons of {\sc Mesa} with 4 representative traditional EIL methods ({\sc SmoteBoost~\cite{chawla2003smoteboost}, SmoteBagging~\cite{wang2009smotebagging}, RusBoost~\cite{seiffert2010rusboost}} and {\sc UnderBagging~\cite{barandela2003underbagging}}) on 3 toy datasets with different levels of underlying class distribution overlapping (less/mid/highly-overlapped in 1st/2nd/3rd row). The number in the lower right corner of each subfigure represents the AUCPRC score of the corresponding classifier. Best viewed in color. } \label{fig:visualization} } \end{figure} \subsection{Experiment on Real-world Datasets} {\bf Setup Details.} In order to verify the effectiveness of {\sc Mesa} in practical applications, we extend the experiments to real-world imbalanced classification tasks from the UCI repository~\cite{Dua2019uci} and KDD CUP 2004. To ensure a thorough assessment, these datasets vary widely in their properties, with the imbalance ratio (IR) ranging from 9.1:1 to 111:1, dataset sizes ranging from 531 to 145,751, and numbers of features ranging from 6 to 617 (please see Table~\ref{table:datasets} in Section~\ref{section:implementation-details} for detailed information). For each dataset, we hold out a 20\% validation set and report the result of 4-fold stratified cross-validation (i.e., a 60\%/20\%/20\% training/validation/test split).
The performance is evaluated using the area under the precision-recall curve (AUCPRC)\footnote{All results are averaged over 10 independent runs.}, which is an unbiased and more comprehensive metric for class-imbalanced tasks compared to other metrics such as F-score, ROC, and accuracy~\cite{davis2006aucprc}. \begin{table*}[t] \centering \tiny \caption{ Comparisons of {\sc Mesa} with other representative resampling methods. } \label{table:comparison-resampling} \begin{tabular}{c|c|ccccc|cc} \toprule \multirow{2}*{Category} & \multirow{2}*{Method} & \multicolumn{5}{c|}{Protein Homo. (IR=111)} & \#Training & Resampling\\ \cline{3-7} & & KNN & GNB & DT & AdaBoost & GBM & Samples & Time (s)\\ \hline No resampling & {\sc - } & 0.466 & 0.742 & 0.531 & 0.778 & 0.796 & 87,450 & - \\ \hline \multirow{2}*{Under-sampling} & {\sc RandomUS} & 0.146 & 0.738 & 0.071 & 0.698 & 0.756 & 1,554 & 0.068 \\ & {\sc NearMiss~\cite{mani2003nearmiss}} & 0.009 & 0.012 & 0.012 & 0.400 & 0.266 & 1,554 & 3.949 \\ \hline \multirow{5}*{Cleaning-sampling} & {\sc Clean~\cite{laurikkala2001ncr}} & 0.469 & 0.744 & 0.488 & 0.781 & 0.811 & 86,196 & 117.739 \\ & {\sc ENN~\cite{wilson1972enn}} & 0.460 & 0.744 & 0.532 & 0.789 & 0.817 & 86,770 & 120.046 \\ & {\sc TomekLink~\cite{tomek1976tomeklink}} & 0.466 & 0.743 & 0.524 & 0.778 & 0.791 & 87,368 & 90.633 \\ & {\sc AllKNN~\cite{tomek1976allknn}} & 0.459 & 0.744 & 0.542 & 0.789 & 0.816 & 86,725 & 327.110 \\ & {\sc OSS~\cite{kubat1997oss}} & 0.466 & 0.743 & 0.536 & 0.778 & 0.789 & 87,146 & 92.234 \\ \hline \multirow{4}*{Over-sampling} & {\sc RandomOS} & 0.335 & 0.706 & 0.505 & 0.736 & 0.733 & 173,346 & 0.098 \\ & {\sc Smote~\cite{chawla2002smote}} & 0.189 & 0.753 & 0.304 & 0.700 & 0.719 & 173,346 & 0.576 \\ & {\sc ADASYN~\cite{he2008adasyn}} & 0.171 & 0.679 & 0.315 & 0.717 & 0.693 & 173,366 & 2.855 \\ & {\sc BorderSmote~\cite{han2005borderline-smote}} & 0.327 & 0.743 & 0.448 & 0.795 & 0.711 & 173,346 & 2.751 \\ \hline \multirow{2}*{Over-sampling + Cleaning} & {\sc SmoteENN~\cite{batista2004smoteenn}} & 0.156 & 0.750 & 0.308 & 0.711 & 0.750 & 169,797 & 156.641 \\ & {\sc SmoteTomek~\cite{batista2003smotetomek}} & 0.185 & 0.749 & 0.292 & 0.782 & 0.703 & 173,346 & 116.401 \\ \hline \multirow{1}*{Meta-sampler} & {\sc Mesa (OURS, $k$=10)} & {\bf 0.585} & {\bf 0.804} & {\bf 0.832} & {\bf 0.849} & {\bf 0.855} & {\it 1,554$\times$10} & {\it 0.235$\times$10} \\ \bottomrule \end{tabular} \end{table*} {\bf Comparison with Resampling Imbalanced Learning (IL) Methods.} We first compare {\sc Mesa} with resampling techniques, which have been widely used in practice for preprocessing imbalanced data~\cite{haixiang2017learning-from-imb-review}. We select 12 representative methods from the 4 major branches of resampling-based IL, i.e., under-sampling, over-sampling, cleaning-sampling, and over-sampling with cleaning-sampling post-processing. We test all methods on the challenging highly-imbalanced (IR=111) {\it Protein Homo.} task to check their efficiency and effectiveness. Five different classifiers, i.e., K-nearest neighbors (KNN), Gaussian Na\"ive Bayes (GNB), decision tree (DT), adaptive boosting (AdaBoost), and gradient boosting machine (GBM), were used in combination with the different resampling approaches. We also record the number of samples used for model training and the time used to perform resampling. Table~\ref{table:comparison-resampling} details the experiment results.
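For completeness, the skeleton of this evaluation protocol (resample only the training split, then score AUCPRC on the untouched test split) could look as follows; the choice of imbalanced-learn's {\sc Smote} and a GBM classifier is illustrative, and \texttt{X}, \texttt{y} denote any imbalanced dataset such as the toy data sketched earlier.

\begin{Verbatim}
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

# Resampling must touch only the training split to avoid leakage.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = GradientBoostingClassifier().fit(X_res, y_res)
auprc = average_precision_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUCPRC: {auprc:.3f}")  # area under the precision-recall curve
\end{Verbatim}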
We show that by learning an adaptive resampling strategy, {\sc Mesa} outperforms other traditional data resampling methods by a large margin while only using a small number of training instances. In such a highly imbalanced dataset, the minority class is poorly represented and lacks a clear structure. Thus, over-sampling methods that rely on relations between minority objects (like {\sc Smote}) may deteriorate the classification performance, even though they generate and use a huge number of synthetic samples for training. On the other hand, under-sampling methods drop most of the samples according to their rules, which results in significant information loss and poor performance. Cleaning-sampling methods aim to remove noise from the dataset, but the resampling time is considerably high and the improvement is trivial. \begin{table*}[t] \centering \tiny \caption{ Comparisons of {\sc Mesa} with other representative under-sampling-based EIL methods. } \label{table:comparison-under-ensemble} \begin{tabular}{c|ccc|ccc|ccc|ccc} \toprule \multirow{2}*{Method} & \multicolumn{3}{c|}{Optical Digits (IR=9.1)} & \multicolumn{3}{c|}{Spectrometer (IR=11)} & \multicolumn{3}{c|}{ISOLET (IR=12)} & \multicolumn{3}{c}{Mammography (IR=42)} \\ \cline{2-13} & $k$=5 & $k$=10 & $k$=20 & $k$=5 & $k$=10 & $k$=20 & $k$=5 & $k$=10 & $k$=20 & $k$=5 & $k$=10 & $k$=20 \\ \hline {\sc RusBoost~\cite{seiffert2010rusboost}} & 0.883 & 0.946 & 0.958 & 0.686 & 0.784 & 0.786 & 0.696 & 0.770 & 0.789 & 0.348 & 0.511 & 0.588 \\ {\sc UnderBagging~\cite{barandela2003underbagging}} & 0.876 & 0.927 & 0.954 & 0.610 & 0.689 & 0.743 & 0.688 & 0.768 & 0.812 & 0.307 & 0.401 & 0.483 \\ {\sc SPE~\cite{liu2019self-paced-ensemble}} & 0.906 & 0.959 & 0.969 & 0.688 & 0.777 & 0.803 & 0.755 & 0.841 & 0.895 & 0.413 & 0.559 & 0.664 \\ {\sc Cascade~\cite{liu2009ee-bc}} & 0.862 & 0.932 & 0.958 & 0.599 & 0.754 & 0.789 & 0.684 & 0.819 & 0.891 & 0.404 & 0.575 & 0.670 \\ \hline {\sc Mesa (OURS)} & {\bf 0.929} & {\bf 0.968} & {\bf 0.980} & {\bf 0.723} & {\bf 0.803} & {\bf 0.845} & {\bf 0.787} & {\bf 0.877} & {\bf 0.921} & {\bf 0.515} & {\bf 0.644} & {\bf 0.705} \\ \bottomrule \end{tabular} \end{table*} \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{figures/comparison_over_sampling.pdf} \caption{ Comparisons of {\sc Mesa} with other representative over-sampling-based EIL methods. } \label{fig:cmp-over-ensemble} \end{figure*} {\bf Comparison with Ensemble Imbalanced Learning (EIL) Methods.} We further compare {\sc Mesa} with 7 representative EIL methods on four real-world imbalanced classification tasks. The baselines include 4 under-sampling-based (USB) EIL methods, i.e., {\sc RusBoost~\cite{seiffert2010rusboost}}, {\sc UnderBagging~\cite{barandela2003underbagging}}, {\sc SPE~\cite{liu2019self-paced-ensemble}}, and {\sc Cascade~\cite{liu2009ee-bc}}, and 3 over-sampling-based (OSB) EIL methods, i.e., {\sc SmoteBoost~\cite{chawla2003smoteboost}, SmoteBagging~\cite{wang2009smotebagging}}, and {\sc RamoBoost~\cite{chen2010ramoboost}}. We use the decision tree as the base learner for all EIL methods, following the settings of most previous works~\cite{haixiang2017learning-from-imb-review}. We report the AUCPRC scores of the USB-EIL methods with different ensemble sizes ($k$=5, 10, 20) in Table~\ref{table:comparison-under-ensemble}. The results show that {\sc Mesa} achieves competitive performance on various real-world tasks.
For the baseline methods, we can observe that {\sc RusBoost} and {\sc UnderBagging} suffer from information loss, as random under-sampling may discard samples with important information, and this effect is more apparent on highly imbalanced tasks. In comparison, the improved sampling strategies of {\sc SPE} and {\sc Cascade} enable them to achieve relatively better performance, but they still underperform {\sc Mesa}. Moreover, as {\sc Mesa} provides an adaptive resampler that makes the ensemble training converge faster and better, its advantage is particularly evident when using small ensembles on highly imbalanced tasks. On the {\it Mammography} dataset (IR=42), compared with the second-best score, {\sc Mesa} achieves 24.70\%/12.00\%/5.22\% performance gains when $k$=5/10/20, respectively. We further compare {\sc Mesa} with the 3 OSB-EIL methods. As summarized in Table~\ref{table:comparison}, OSB-EIL methods typically use much more data (1-2$\times$IR times) to train each base learner than their under-sampling-based competitors, including {\sc Mesa}. It is therefore unfair to directly compare {\sc Mesa} with over-sampling-based baselines under the same ensemble size; instead, we plot the performance curve with regard to the number of instances used in ensemble training, as shown in Fig.~\ref{fig:cmp-over-ensemble}. It can be observed that our method {\sc Mesa} consistently outperforms over-sampling-based methods, especially on highly imbalanced/high-dimensional tasks (e.g., ISOLET with 617 features, Mammo. with IR=42). {\sc Mesa} also shows high sample efficiency and faster convergence: compared with the baselines, it requires only a few training instances to converge to a strong ensemble classifier, and its training process is more stable. The OSB-EIL methods perform resampling by analyzing and reinforcing the structure of the minority class data. When the dataset is small or highly imbalanced, the minority class is usually under-represented and lacks a clear structure, so the performance of these OSB-EIL methods becomes unstable under such circumstances. \begin{table}[t] \centering \tiny \caption{Cross-task transferability of the meta-sampler.} \label{table:cross-task-results} \begin{tabular}{c|cccc|cccc} \toprule \multirow{2}*{\diagbox[width=8em]{Meta-train}{Meta-test}} & \multicolumn{4}{c|}{Mammography (IR=42, 11,183 instances)} & \multicolumn{4}{c}{Protein Homo. (IR=111, 145,751 instances)} \\ \cline{2-9} & $k$=10 & $\Delta$ & $k$=20 & $\Delta$ & $k$=10 & $\Delta$ & $k$=20 & $\Delta$ \\ \hline 100\% & 0.644$\pm$0.028 & baseline & 0.705$\pm$0.015 & baseline & 0.840$\pm$0.009 & baseline & 0.874$\pm$0.008 & baseline \\ 50\% subset & 0.642$\pm$0.032 & -0.30\% & 0.702$\pm$0.017 & -0.43\% & 0.839$\pm$0.009 & -0.12\% & 0.872$\pm$0.009 & -0.23\% \\ 10\% subset & 0.640$\pm$0.031 & -0.62\% & 0.700$\pm$0.017 & -0.71\% & 0.839$\pm$0.008 & -0.10\% & 0.871$\pm$0.006 & -0.34\% \\ \hline Optical Digits & 0.637$\pm$0.029 & -1.09\% & 0.701$\pm$0.015 & -0.57\% & 0.839$\pm$0.006 & -0.12\% & 0.870$\pm$0.006 & -0.46\% \\ Spectrometer & 0.641$\pm$0.025 & -0.54\% & 0.697$\pm$0.021 & -1.13\% & 0.836$\pm$0.009 & -0.48\% & 0.870$\pm$0.006 & -0.46\% \\ \bottomrule \end{tabular} \end{table} {\bf Cross-task Transferability of the Meta-sampler.} \label{subsubsection:cross-task-transferability} One important feature of {\sc Mesa} is its cross-task transferability. As the meta-sampler is trained on task-agnostic meta-data, it is {\it not} task-bound and is directly applicable to new tasks.
This provides {\sc Mesa} with better scalability, as one can directly use a pre-trained meta-sampler on new tasks, thus greatly reducing the meta-training cost. To validate this, we use {\it Mammography} and {\it Protein Homo.} as two larger and highly imbalanced meta-test tasks, then consider five meta-training tasks including the original task (baseline), two sub-tasks with {50\%/10\%} of the original training set, and two small tasks, {\it Optical Digits} and {\it Spectrometer}. Table~\ref{table:cross-task-results} reports the detailed results. We can observe that the transferred meta-samplers generalize well on the meta-test tasks. Scaling down the number of meta-training instances has a minor effect on the obtained meta-sampler, especially when the original task has a sufficient number of training samples (e.g., for {\it Protein Homo.}, reducing the meta-training set to a 10\% subset only results in -0.10\%/-0.34\% $\Delta$ when $k$=10/20). Moreover, a meta-sampler trained on a small task also demonstrates satisfactory performance (superior to the other baselines) on new, larger, and even heterogeneous tasks, which validates the generality of the proposed {\sc Mesa} framework. Please refer to Section~\ref{section:additional-results} for a comprehensive cross/sub-task transferability test and other additional experimental results. \section{Introduction} {\it Class imbalance}, arising from naturally skewed class distributions, is widely observed in many real-world applications such as click prediction, fraud detection, and medical diagnosis~\cite{graepel2010ctr,haixiang2017learning-from-imb-review,japkowicz2002systematic-study}. When applied to class-imbalanced problems, canonical classification algorithms usually induce a bias: they perform well in terms of global accuracy but poorly on the minority class. However, the minority class is commonly of greater interest from both learning and practical perspectives~\cite{he2008overview,he2013overview}. Typical imbalanced learning (IL) algorithms attempt to eliminate the bias through data {\it resampling} \cite{chawla2002smote,han2005borderline-smote,he2008adasyn,laurikkala2001ncr,mani2003nearmiss} or {\it reweighting} \cite{lin2017focalloss,liu2006cost-sensitive-imbalance,shrivastava2016hard-example-mining} in the learning process. More recently, ensemble learning has been incorporated to reduce the variance introduced by resampling or reweighting and has achieved satisfactory performance~\cite{krawczyk2016learning}. In practice, however, all these methods have been observed to suffer from three major limitations: (I) unstable performance due to sensitivity to outliers, (II) poor applicability because of the prerequisite of domain experts to hand-craft the cost matrix, and (III) the high cost of computing the distance between instances. Setting aside the computational issue, we attribute the unsatisfactory performance of traditional IL methods to the questionable validity of the heuristic assumptions they make about the training data. For instance, some methods~\cite{chawla2003smoteboost,freund1997adaboost,liu2009ee-bc,seiffert2010rusboost} assume that instances with higher training errors are more informative for learning. However, misclassification may be caused by outliers, in which case the above assumption leads to error reinforcement. Another widely used assumption is that generating synthetic samples around minority instances helps with learning~\cite{chawla2003smoteboost,chen2010ramoboost,wang2009smotebagging}.
This assumption only holds when the minority data is well clustered and sufficiently discriminative. If the training data is extremely imbalanced or contains many corrupted labels, the minority class will be poorly represented and lack a clear structure; in this case, working under this assumption severely jeopardizes performance. Hence, it is highly desirable to develop an adaptive IL framework capable of handling complex real-world tasks without such heuristic assumptions. Inspired by the recent developments in meta-learning~\cite{lake2015meta-learning}, we propose to realize the meta-learning mechanism within an ensemble imbalanced learning (EIL) framework. In fact, some preliminary efforts~\cite{peng2019trainable-under-sampling,ren2018learning-to-reweight,han2018meta-weight-net} have investigated the potential of applying meta-learning to IL problems. Nonetheless, these works generalize poorly because of their model-dependent optimization process. Their meta-learners are confined to be co-optimized with a single DNN, which greatly limits their application to other learning models (e.g., tree-based models) as well as their deployment into the more powerful EIL framework. In this paper, we propose a generic EIL framework, {\sc Mesa}, that automatically learns its strategy, i.e., the meta-sampler, from data towards optimizing imbalanced classification. The main idea is to model a meta-sampler that serves as an adaptive under-sampling solution embedded in the iterative ensemble training process. In each iteration, it takes the current state of ensemble training (i.e., the classification error distribution on both the training and validation sets) as its input. Based on this, the meta-sampler selects a subset to train a new base classifier and then adds it to the ensemble, thus yielding a new state. We expect the meta-sampler to maximize the final generalization performance by learning from such interactions. To this end, we use reinforcement learning (RL) to solve the non-differentiable optimization problem of the meta-sampler. To summarize, this paper makes the following contributions. (I) We propose {\sc Mesa}, a generic EIL framework that demonstrates superior performance by automatically learning an adaptive under-sampling strategy from data. (II) We carry out a preliminary exploration of extracting and using cross-task meta-information in EIL systems. The use of such meta-information gives the meta-sampler cross-task transferability: a pretrained meta-sampler can be directly applied to new tasks, thereby greatly reducing the computational cost brought about by meta-training. (III) Unlike prevailing methods whose meta-learners are designed to be co-optimized with a specific learning model (i.e., a DNN) during training, we decouple the model-training and meta-training processes in {\sc Mesa}. This makes our framework generally applicable to most statistical and non-statistical learning models (e.g., decision tree, Na\"ive Bayes, k-nearest neighbor classifier). \section{The proposed \scshape{Mesa} framework} \label{section:meta-sampler} \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{figures/mesa-framework-with-sampler.pdf} \caption{Overview of the proposed {\sc Mesa} Framework. Best viewed in color.} \label{fig:overview} \end{figure} In order to take advantage of both ensemble learning and meta-learning, we propose a novel EIL framework named {\sc Mesa} that works with a meta-sampler.
As shown in Fig.~\ref{fig:overview}, {\sc Mesa} consists of three parts: {\it meta-sampling} and {\it ensemble training}, which build the ensemble classifier, and {\it meta-training}, which optimizes the meta-sampler. We describe each of them in this section. Specifically, {\sc Mesa} is designed to: (I) perform resampling based on meta-information to further boost the performance of ensemble classifiers; (II) decouple model-training and meta-training for general applicability to different classifiers; (III) train the meta-sampler over task-agnostic meta-data for cross-task transferability and for reducing the meta-training cost on new tasks. {\bf Notations.} Let $\mathcal{X}:\mathbb{R}^d$ be the input feature space and $\mathcal{Y}:\{0,1\}$ be the label space. An instance is represented by $(x,y)$, where $x \in \mathcal{X}$, $y \in \mathcal{Y}$. Without loss of generality, we always assume that the minority class is positive. Given an imbalanced dataset $\mathcal{D}: \{(x_1,y_1), (x_2,y_2), \cdots, (x_n,y_n)\}$, the minority set is $\mathcal{P}: \{(x, y)\ |\ y = 1, (x, y) \in \mathcal{D}\}$ and the majority set is $\mathcal{N}: \{(x, y)\ |\ y = 0, (x, y) \in \mathcal{D}\}$. For highly imbalanced data we have $|\mathcal{N}| \gg |\mathcal{P}|$. We use $f: x \to [0, 1]$ to denote a single classifier and $F_k: x \to [0, 1]$ to denote an ensemble classifier formed by $k$ base classifiers. We use $\mathcal{D}_{\tau}$ and $\mathcal{D}_{v}$ to represent the training set and validation set, respectively. {\bf Meta-state.} As mentioned before, we expect to find a task-agnostic representation that can provide the meta-sampler with information about the ensemble training process. Motivated by the concept of ``gradient/hardness distribution'' from~\cite{li2019gradient-harmonize,liu2019self-paced-ensemble}, we introduce the histogram distribution of the training and validation errors as the meta-state of the ensemble training system. Formally, given a data instance $(x,y)$ and an ensemble classifier $F_{t}(\cdot)$, the classification error $e$ is defined as the absolute difference between the predicted probability of $x$ being positive and the ground truth label $y$, i.e., $|F_{t}(x)-y|$. Suppose the error distribution on dataset $\mathcal{D}$ is $E_{\mathcal{D}}$; then its histogram approximation is given by a vector $\widehat{E}_{\mathcal{D}} \in \mathbb{R}^b$, where $b$ is the number of bins in the histogram. Specifically, the $i$-th component of the vector $\widehat{E}_{\mathcal{D}}$ can be computed as follows\footnote{To avoid confusion, in Eq.~\ref{eq:error-distribution}, we use $|\cdot|$ and $abs(\cdot)$ to denote cardinality and absolute value, respectively.}: \begin{equation} \label{eq:error-distribution} \widehat{E}_{\mathcal{D}}^{i} = \frac{|\{(x,y)\ |\ \frac{i-1}{b} \le abs(F_{t}(x)-y) < \frac{i}{b}\ ,(x,y) \in \mathcal{D} \}|}{|\mathcal{D}|}, 1 \le i \le b. \end{equation} After concatenating the error distribution vectors on the training and validation sets, we have the meta-state: \begin{equation} \label{eq:meta-state} s = [\widehat{E}_{\mathcal{D}_{\tau}}:\widehat{E}_{\mathcal{D}_{v}}] \in \mathbb{R}^{2b}. \end{equation} Intuitively, the histogram error distribution $\widehat{E}_{\mathcal{D}}$ shows how well the given classifier fits the dataset $\mathcal{D}$. When $b=2$, it reports the accuracy score in $\widehat{E}_{\mathcal{D}}^1$ and the misclassification rate in $\widehat{E}_{\mathcal{D}}^2$ (classification threshold is 0.5).
With $b>2$, it shows the distribution of ``easy'' samples (with errors close to 0) and ``hard'' samples (with errors close to 1) at a finer granularity, and thus contains more information to guide the resampling process. Moreover, since we consider both the training and validation sets, the meta-state also provides the meta-sampler with information about the bias/variance of the current ensemble model, thus supporting its decisions. We show some illustrative examples in Fig.~\ref{fig:meta-state}. \begin{figure}[t] \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=1\linewidth]{figures/meta-state.pdf} \end{minipage} \hspace{\fill} \begin{minipage}{0.51\linewidth} \vspace{5pt} \caption{ Some examples of different meta-states ($s = [\widehat{E}_{\mathcal{D}_{\tau}}:\widehat{E}_{\mathcal{D}_{v}}]$) and their corresponding ensemble training states. The meta-state reflects how well the current classifier fits the training set, and how well it generalizes to unseen validation data. Note that such a representation is independent of the properties of the specific task (e.g., dataset size, feature space) and thus can support the meta-sampler in performing adaptive resampling across different tasks. } \label{fig:meta-state} \end{minipage} \end{figure} {\bf Meta Sampling.} Making instance-level decisions with a complex meta-sampler (e.g., one with a large output layer or a recurrent neural network) is extremely time-consuming, as the complexity of a single update $C_{u}$ is $\mathcal{O}(|\mathcal{D}|)$. Besides, a complex model architecture also brings extra memory cost and optimization difficulties. To make {\sc Mesa} more concise and efficient, we use a Gaussian function trick to simplify the meta-sampling process and the sampler itself, reducing $C_{u}$ from $\mathcal{O}(|\mathcal{D}|)$ to $\mathcal{O}(1)$. Specifically, let $\Im$ denote the meta-sampler; it outputs a scalar $\mu \in [0,1]$ based on the input meta-state $s$, i.e., $\mu \thicksim \Im(\mu|s)$. We then apply a Gaussian function $g_{\mu,\sigma}(x)$ over each instance's classification error to decide its (unnormalized) sampling weight, where $g_{\mu,\sigma}(x)$ is defined as: \begin{equation} \label{eq:gaussian-function} g_{\mu,\sigma}(x) = \frac{1}{\sigma \sqrt{2 \pi}}e^{-\frac{1}{2}(\frac{x-\mu}{\sigma})^2}. \end{equation} Note that in Eq.~\ref{eq:gaussian-function}, $e$ is Euler's number, $\mu \in [0,1]$ is given by the meta-sampler, and $\sigma$ is a hyper-parameter. Please refer to Section~\ref{section:hyper-parameter} for discussions and guidelines about our hyper-parameter setting. The above meta-sampling procedure $\texttt{Sample}(~\cdot~; F, \mu, \sigma)$ is summarized in Algorithm~\ref{alg:meta-sampling}. \begin{table} \begin{minipage}{0.48\linewidth} \begin{algorithm}[H] \caption{$\text{\tt Sample}(\mathcal{D}_{\tau}; F, \mu, \sigma)$} \label{alg:meta-sampling} \begin{algorithmic}[1] \Require $\mathcal{D}_{\tau}$, $F$, $\mu$, $\sigma$ \State Initialization: derive minority set $\mathcal{P}_{\tau}$ and majority set $\mathcal{N}_{\tau}$ from $\mathcal{D}_{\tau}$\; \State Assign each $(x_i,y_i)$ in $\mathcal{N}_{\tau}$ with weight: $$w_i = \frac{g_{\mu,\sigma}(|F(x_i)-y_i|)}{\sum_{(x_j, y_j) \in \mathcal{N}_{\tau}} g_{\mu,\sigma}(|F(x_j)-y_j|)}$$ \State Sample majority subset $\mathcal{N}_{\tau}^{'}$ from $\mathcal{N}_{\tau}$ w.r.t.
sampling weights $w$, where $|\mathcal{N}_{\tau}^{'}| = |\mathcal{P}_{\tau}|$\; \State \Return balanced subset $\mathcal{D}_{\tau}^{'} = \mathcal{N}_{\tau}^{'} \cup \mathcal{P}_{\tau}$ \end{algorithmic} \end{algorithm} \end{minipage} \hspace{10pt} \begin{minipage}{0.48\linewidth} \begin{algorithm}[H] \caption{Ensemble training in {\sc Mesa}} \label{alg:ensemble-training} \begin{algorithmic}[1] \Require $\mathcal{D}_{\tau}$, $\mathcal{D}_{v}$, $\Im$, $\sigma$, $f$, $b$, $k$ \State train $f_1(x)$ with a random balanced subset\; \For{$t$=1 to $k-1$} \State $F_{t}(x) = \frac{1}{t}\sum_{i=1}^{t} f_i(x)$\; \State compute $\widehat{E}_{\mathcal{D}_{\tau}}$ and $\widehat{E}_{\mathcal{D}_{v}}$ by Eq.~\ref{eq:error-distribution}\; \State $s_t = [\widehat{E}_{\mathcal{D}_{\tau}}:\widehat{E}_{\mathcal{D}_{v}}]$\; \State $\mu_t \thicksim \Im(\mu_t|s_t)$\; \State $\mathcal{D}^{'}_{t\text{+1},\tau} = \text{\tt Sample}(\mathcal{D}_{\tau}; F_{t}, \mu_t, \sigma)$\; \State train new classifier $f_{t\text{+1}}(x)$ with $\mathcal{D}^{'}_{t\text{+1},\tau}$\; \EndFor \State \Return {$F_k(x) = \frac{1}{k}\sum_{i=1}^k f_i(x)$} \end{algorithmic} \end{algorithm} \end{minipage} \end{table} {\bf Ensemble Training.} Given a meta-sampler $\Im: \mathbb{R}^{2b} \to [0,1]$ and the meta-sampling strategy, we can iteratively train new base classifiers using the subsets selected by the sampler. At the $t$-th iteration, having the current ensemble $F_{t}(\cdot)$, we obtain $\widehat{E}_{\mathcal{D}_{\tau}}$, $\widehat{E}_{\mathcal{D}_{v}}$, and the meta-state $s_t$ by applying Eqs. (\ref{eq:error-distribution}) and (\ref{eq:meta-state}). Then a new base classifier $f_{t+1}(\cdot)$ is trained with the subset $\mathcal{D}^{'}_{t+1,\tau} = \texttt{Sample}(\mathcal{D}_{\tau}; F_{t}, \mu_t, \sigma)$, where $\mu_t \thicksim \Im(\mu_t|s_t)$ and $\mathcal{D}_{\tau}$ is the original training set. Note that $f_1(\cdot)$ is trained on a random balanced subset, as there is no trained classifier in the first iteration. See Algorithm~\ref{alg:ensemble-training} for more details. \begin{algorithm}[t] \caption{Meta-training in {\sc Mesa}} \label{alg:meta-training} \begin{algorithmic}[1] \State Initialization: replay memory $\mathcal{M}$ with capacity $N$, network parameters $\psi, \bar{\psi}, \theta,$ and $\varphi$ \For {episode = 1 to $M$} \For{each environment step $t$} \State observe $s_t$ from $\mathsf{ENV}$ \Comment lines 3-5 in Alg.~\ref{alg:ensemble-training}\; \State take action $\mu_t \thicksim \Im_\varphi(\mu_t|s_t)$ \Comment lines 6-8 in Alg.~\ref{alg:ensemble-training}\; \State observe reward $r_t = P(F_{t\text{+1}}, \mathcal{D}_v) - P(F_{t}, \mathcal{D}_v)$ and $s_{t+1}$ \State store transition $\mathcal{M} = \mathcal{M} \cup \{(s_t, \mu_t, r_t, s_{t+1})\}$ \EndFor \For{each gradient step} \State update $\psi, \bar{\psi}, \theta,$ and $\varphi$ according to~\cite{haarnoja2018soft-actor-critic} \EndFor \EndFor \State \Return {meta-sampler $\Im$ with parameters $\varphi$} \end{algorithmic} \end{algorithm} {\bf Meta Training.} As described above, our meta-sampler $\Im$ is trained to optimize the generalization performance of an ensemble classifier by iteratively selecting its training data. It takes the current state $s$ of the training system as input, and then outputs the parameter $\mu$ of a Gaussian function to decide each instance's sampling probability. The meta-sampler is expected to learn and adapt its strategy from such state($s$)-action($\mu$)-state(new $s$) interactions.
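To make the above concrete, the following is a minimal NumPy sketch of the meta-state of Eqs.~\ref{eq:error-distribution}-\ref{eq:meta-state} and of the \texttt{Sample} procedure of Algorithm~\ref{alg:meta-sampling}; here {\tt F} is assumed to be a callable returning the predicted probability of the positive class, and the function names are ours:

\begin{verbatim}
# Minimal sketch of the meta-state and the Sample procedure.
# `F` is assumed to be a callable returning P(y = 1 | x).
import numpy as np

def gaussian_weight(err, mu, sigma=0.2):
    # Unnormalized Gaussian sampling weight g_{mu,sigma}(err).
    return np.exp(-0.5 * ((err - mu) / sigma) ** 2) \
        / (sigma * np.sqrt(2 * np.pi))

def meta_state(F, X_tr, y_tr, X_val, y_val, b=10):
    # Histogram error distributions on training/validation sets,
    # concatenated into the 2b-dimensional meta-state s.
    e_tr = np.abs(F(X_tr) - y_tr)
    e_val = np.abs(F(X_val) - y_val)
    h_tr, _ = np.histogram(e_tr, bins=b, range=(0.0, 1.0))
    h_val, _ = np.histogram(e_val, bins=b, range=(0.0, 1.0))
    return np.concatenate([h_tr / len(e_tr), h_val / len(e_val)])

def sample(X_maj, y_maj, X_min, y_min, F, mu, sigma=0.2, seed=None):
    # Gaussian-weighted under-sampling of the majority class:
    # draw |P| majority instances w.r.t. normalized weights, then
    # return the balanced union with the full minority set.
    rng = np.random.default_rng(seed)
    w = gaussian_weight(np.abs(F(X_maj) - y_maj), mu, sigma)
    idx = rng.choice(len(X_maj), size=len(X_min),
                     replace=False, p=w / w.sum())
    X_bal = np.vstack([X_maj[idx], X_min])
    y_bal = np.concatenate([y_maj[idx], y_min])
    return X_bal, y_bal
\end{verbatim}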
This non-differentiable optimization problem of training $\Im$ can be naturally approached via reinforcement learning (RL). We consider the ensemble training system as the environment ($\mathsf{ENV}$) in the RL setting. The corresponding Markov decision process (MDP) is defined by the tuple ($\mathcal{S}, \mathcal{A}, p, r$), where the state space $\mathcal{S}:\mathbb{R}^{2b}$ and the action space $\mathcal{A}:[0,1]$ are continuous, and the unknown state transition probability $p: \mathcal{S \times S \times A} \to [0, \infty)$ represents the probability density of the next state $s_{t+1} \in \mathcal{S}$ given the current state $s_t \in \mathcal{S}$ and action $a_t \in \mathcal{A}$. More specifically, in each episode, we iteratively train $k$ base classifiers $f(\cdot)$ and form a cascade ensemble classifier $F_k(\cdot)$. In each environment step, $\mathsf{ENV}$ provides the meta-state $s_t = [\widehat{E}_{\mathcal{D}_{\tau}}:\widehat{E}_{\mathcal{D}_{v}}]$, and then the action $a_t$ is selected by $a_t \thicksim \Im(\mu_t|s_t)$, i.e., $a_t \Leftrightarrow \mu_t$. A new base classifier $f_{t+1}(\cdot)$ is trained using the subset $\mathcal{D}^{'}_{t+1,\tau} = \texttt{Sample}(\mathcal{D}_{\tau}; F_{t}, a_t, \sigma)$. After adding $f_{t+1}(\cdot)$ into the ensemble, the new state $s_{t+1}$ is sampled according to $s_{t+1} \thicksim p(s_{t+1}; s_t, a_t)$. Given a performance metric function $P(F, \mathcal{D}) \to \mathbb{R}$, the reward $r$ is set to the generalization performance difference of $F$ before and after an update (using the held-out validation set for unbiased estimation), i.e., $r_t = P(F_{t+1}, \mathcal{D}_v) - P(F_t, \mathcal{D}_v)$. The optimization goal of the meta-sampler (i.e., the cumulative reward) is thus the generalization performance of the ensemble classifier. We take advantage of Soft Actor-Critic~\cite{haarnoja2018soft-actor-critic} ({\sc Sac}), an off-policy actor-critic deep RL algorithm based on the maximum entropy RL framework, to optimize our meta-sampler $\Im$. In our case, we consider a parameterized state value function $V_\psi(s_t)$ and its corresponding target network $V_{\bar{\psi}}(s_t)$, a soft Q-function $Q_\theta(s_t, a_t)$, and a tractable policy (meta-sampler) $\Im_\varphi(a_t|s_t)$. The parameters of these networks are $\psi, \bar{\psi}, \theta,$ and $\varphi$. The rules for updating these parameters are given in the {\sc Sac} paper~\cite{haarnoja2018soft-actor-critic}. We summarize the meta-training process of $\Im_\varphi$ in Algorithm~\ref{alg:meta-training}. {\bf Complexity analysis.} Please refer to Section~\ref{section:complexity-analysis} for a detailed complexity analysis of {\sc Mesa}, along with related validation experiments in Fig.~\ref{fig:subtask-meta-training-cost}. \section{Related Work} \label{section:background} Fernández et al.~\cite{albert02018experiment}, Guo et al.~\cite{haixiang2017learning-from-imb-review}, and He et al.~\cite{he2008overview,he2013overview} provided systematic reviews of algorithms and applications of imbalanced learning. In this paper, we focus on the \emph{binary imbalanced classification} problem, which is one of the most widely studied problem settings~\cite{haixiang2017learning-from-imb-review,krawczyk2016learning} in imbalanced learning. Such problems exist extensively in practical applications, e.g., fraud detection (fraud vs. normal), medical diagnosis (sick vs. healthy), and cybersecurity (intrusion vs. user connection). We mainly review existing works on this problem as follows.
{\bf Resampling Methods.} Resampling methods focus on modifying the training set to balance the class distribution (i.e., over/under-sampling~\cite{chawla2002smote,han2005borderline-smote,he2008adasyn,mani2003nearmiss,smith2014instance-complexity}) or to filter noise (i.e., cleaning resampling~\cite{laurikkala2001ncr,tomek1976tomeklink}). Random resampling usually leads to severe information loss or overfitting, hence many advanced methods explore distance information to guide their sampling process~\cite{haixiang2017learning-from-imb-review}. However, calculating the distance between instances is computationally expensive on large-scale datasets, and such strategies may even fail to work when the data does not fit their assumptions. {\bf Reweighting Methods.} Reweighting methods assign different weights to different instances to alleviate a classifier's bias towards the majority groups (e.g.,~\cite{chai2004csnb,freund1997adaboost,ling2004csdt,liu2006cost-sensitive-imbalance}). Many recent reweighting methods such as FocalLoss~\cite{lin2017focalloss} and GHM~\cite{li2019gradient-harmonize} are specifically designed for DNN loss function engineering. Class-level reweighting such as cost-sensitive learning~\cite{liu2006cost-sensitive-imbalance} is more versatile but requires a cost matrix given by domain experts beforehand, which is usually infeasible in practice. {\bf Ensemble Methods.} Ensemble imbalanced learning (EIL) is known to effectively improve typical IL solutions by combining the outputs of multiple classifiers (e.g.,~\cite{chawla2003smoteboost,liu2009ee-bc,liu2019self-paced-ensemble,seiffert2010rusboost,wang2009smotebagging}). These EIL approaches prove to be highly competitive~\cite{krawczyk2016learning} and thus gain increasing popularity~\cite{haixiang2017learning-from-imb-review} in IL. However, most of them are straightforward combinations of a resampling/reweighting solution and an ensemble learning framework, e.g., {\sc Smote~\cite{chawla2002smote}+AdaBoost~\cite{freund1997adaboost}=SmoteBoost~\cite{chawla2003smoteboost}}. Consequently, although EIL techniques effectively lower the variance introduced by resampling/reweighting, these methods still suffer from unsatisfactory performance due to their heuristic-based designs. {\bf Meta-learning Methods.} Inspired by recent meta-learning developments~\cite{finn2017model-agnostic-meta-learning,lake2015meta-learning}, some studies adapt meta-learning to solve the IL problem. Typical methods include Learning to Teach~\cite{wu2018learning2teach}, which learns a dynamic loss function, MentorNet~\cite{jiang2017mentornet}, which learns a mini-batch curriculum, and L2RW~\cite{ren2018learning-to-reweight}/Meta-Weight-Net~\cite{han2018meta-weight-net}, which learn an implicit/explicit data weighting function. Nonetheless, all these methods are confined to be co-optimized with a DNN by gradient descent. As the success of deep learning relies on massive training data, mainly from well-structured domains such as computer vision and natural language processing, applying these methods to other learning models (e.g., tree-based models and their ensemble variants such as the gradient boosting machine) and to traditional classification tasks (e.g., small/unstructured/tabular data) is highly constrained. We present a comprehensive comparison of existing IL solutions for the binary imbalanced classification problem with our {\sc Mesa} in Table~\ref{table:comparison}. Compared with other methods, {\sc Mesa} aims to learn a resampling strategy directly from data.
It is able to perform quick and adaptive resampling, as no distance computation, domain knowledge, or related heuristics are involved in the resampling process. \begin{table*}[t] \centering \tiny \caption{ Comparisons of {\sc Mesa} with existing imbalanced learning methods; note that $|\mathcal{N}| \gg |\mathcal{P}|$. } \label{table:comparison} \begin{threeparttable} \resizebox{\linewidth}{!}{ \begin{tabular}{c|c|ccccc} \toprule \multirow{2}*{Category\tnote{*}} & \multirow{2}*{Representative(s)} & Sample & Distance-based & Domain kno- & Robust to noi-& \multirow{2}*{Requirements}\\ & & efficiency & resampling cost & wledge free? & ses/outliers? & \\ \midrule RW & \cite{ling2004csdt}, \cite{chai2004csnb} & $\mathcal{O(|P|+|N|)}$ & \ding{55} & \ding{55} & \textcolor{black}{\ding{51}}{\textcolor{black}{\kern-0.65em\ding{55}}} & cost matrix set by domain experts\\ US & \cite{mani2003nearmiss}, \cite{smith2014instance-complexity} & $\mathcal{O}(2|\mathcal{P}|)$ & $\mathcal{O}(|\mathcal{P}|)$ & \ding{51} & \ding{55} & well-defined distance metric \\ OS & \cite{chawla2002smote}, \cite{he2008adasyn} & $\mathcal{O}(2|\mathcal{N}|)$ & $\mathcal{O}(|\mathcal{P}|)$ & \ding{51} & \ding{55} & well-defined distance metric \\ CS & \cite{wilson1972enn}, \cite{tomek1976allknn}& $\mathcal{O(|P|+|N|)}$ & $\mathcal{O}(|\mathcal{P}|\cdot|\mathcal{N}|)$ & \ding{51} & \ding{51} & well-defined distance metric \\ OS+CS & \cite{batista2004smoteenn}, \cite{batista2003smotetomek} & $\mathcal{O}(2|\mathcal{N}|)$ & $\mathcal{O}(|\mathcal{P}|\cdot|\mathcal{N}|)$ & \ding{51} & \ding{51} & well-defined distance metric \\ \midrule IE+RW & \cite{freund1997adaboost}, \cite{sun2007cost-boost} & $\mathcal{O}(k(\mathcal{|P|+|N|}))$ & \ding{55} & \ding{55} & \ding{55} & cost matrix set by domain experts\\ PE+US & \cite{barandela2003underbagging}, \cite{liu2009ee-bc} & $\mathcal{O}(2k|\mathcal{P}|)$ & \ding{55} & \ding{51} & \ding{51} & - \\ PE+OS & \cite{wang2009smotebagging} & $\mathcal{O}(2k|\mathcal{N}|)$ & $\mathcal{O}(2k|\mathcal{P}|)$ & \ding{51} & \ding{51} & well-defined distance metric \\ IE+RW+US & \cite{seiffert2010rusboost} & $\mathcal{O}(2k|\mathcal{P}|)$ & \ding{55} & \ding{51} & \ding{55} & - \\ IE+RW+OS & \cite{chawla2003smoteboost} & $\mathcal{O}(2k|\mathcal{N}|)$ & $\mathcal{O}(2k|\mathcal{P}|)$ & \ding{51} & \ding{55} & well-defined distance metric\\ \midrule ML & \cite{han2018meta-weight-net}, \cite{ren2018learning-to-reweight}, \cite{wu2018learning2teach} & $\mathcal{O(|P|+|N|)}$ & \ding{55} & \textcolor{black}{\ding{51}}{\textcolor{black}{\kern-0.65em\ding{55}}} & \ding{51} & co-optimized with DNN only \\ IE+ML & MESA(ours) & $\mathcal{O}(2k|\mathcal{P}|)$ & \ding{55} & \ding{51} & \ding{51} & independent meta-training \\ \bottomrule \end{tabular}} \begin{tablenotes} \tiny \item[*] reweighting (RW), under-sampling (US), over-sampling (OS), cleaning-sampling (CS), iterative ensemble (IE), parallel ensemble (PE), meta-learning (ML).
\end{tablenotes} \end{threeparttable} \end{table*} \section{Additional Results} \label{section:additional-results} \subsection{Cross-task and sub-task transferability of the meta-sampler} \label{section:additional-results-transfer} \begin{figure}[h] \centering \subfigure[Cross-task transfer performance.]{ \centering \includegraphics[width=0.48\linewidth]{figures/transfer-cross-task-performance.pdf} \label{fig:transfer-cross-task-performance} } \subfigure[Cross-task transfer performance loss.]{ \centering \includegraphics[width=0.48\linewidth]{figures/transfer-cross-task-performance-loss.pdf} \label{fig:transfer-cross-task-performance-loss} } \subfigure[Sub-task transfer performance.]{ \centering \includegraphics[width=0.48\linewidth]{figures/transfer-sub-task-performance.pdf} \label{fig:transfer-sub-task-performance} } \subfigure[Sub-task transfer performance loss.]{ \centering \includegraphics[width=0.48\linewidth]{figures/transfer-sub-task-performance-loss.pdf} \label{fig:transfer-sub-task-performance-loss} } \caption{Cross/Sub-task transfer performance loss of {\sc Mesa}.} \label{fig:transfer-heatmap} \end{figure} In addition to the results reported in Table~\ref{table:cross-task-results}, we conduct further experiments on all five tasks to test the cross-task transferability of the meta-sampler. The results are presented in Fig.~\ref{fig:transfer-heatmap} (with $k$=10). For the cross-task transfer experiment, we meta-train the meta-sampler on each task separately, then apply it to the other unseen meta-test tasks. As shown in Fig.~\ref{fig:transfer-cross-task-performance-loss}, among the 20 heterogeneous training-test task pairs, 18 show less than 1\% performance loss. On the other hand, in the sub-task transfer experiment, for each task we meta-train the meta-sampler on a 100\%/50\%/25\%/10\%/5\% subset, then apply it back to the original full dataset. Again, {\sc Mesa} shows robust performance: among the 20 subset-transfer experiments, 17 show less than 1\% performance loss. The effect of reducing the meta-training set scale is more significant on small datasets. The largest performance loss (-1.64\%) is reported in the \{5\%, {\it Spectrometer}\} setting, which involves the smallest dataset with only 531 instances. For large datasets, scaling down the meta-training set greatly reduces the number of instances as well as the meta-training cost, while bringing only a minor performance loss, e.g., -0.23\% loss in \{5\%, {\it Protein Homo.}\}. \subsection{Robustness to corrupted labels} In practice, the collected training dataset may contain corrupted labels. Typical examples include data labeled by crowdsourcing systems or search engines~\cite{hendrycks2018using-trusted-data-noisy-labels,li2017learning-from-noisy-labels}. The negative impact brought by noise is particularly prominent on skewed datasets that inherently have an unclear minority data structure. In this experiment, the {\it Mammography} and {\it Protein Homo.} tasks are used to test the robustness of different EIL methods on highly imbalanced datasets. We simulate real-world corrupted labels by introducing flip noise. Specifically, we flip the labels of $r_{\text{noise}}$\% of the minority samples (i.e., $|\mathcal{P}| \cdot r_{\text{noise}}$ instances) in the training set from 1 to 0. Accordingly, an equal number of majority samples are flipped from 0 to 1. We thereby obtain a noisy dataset with the same IR. For each dataset, we test the USB-EIL methods with $k=10$ trained on the 0\%/10\%/25\%/40\% noisy training sets.
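A minimal sketch of this flip-noise injection (NumPy; the function name is ours) is:

\begin{verbatim}
# Minimal sketch of the flip-noise injection described above.
import numpy as np

def add_flip_noise(y, r_noise, seed=None):
    # Flip r_noise of the minority (label 1) samples to 0 and an
    # equal number of majority samples to 1, preserving the IR.
    rng = np.random.default_rng(seed)
    y = y.copy()
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    n_flip = int(len(minority) * r_noise)   # |P| * r_noise
    y[rng.choice(minority, n_flip, replace=False)] = 0
    y[rng.choice(majority, n_flip, replace=False)] = 1
    return y

# Example: y_noisy = add_flip_noise(y_train, r_noise=0.25, seed=0)
\end{verbatim}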
The results are summarized in Table~\ref{table:label-noise}, which shows that {\sc Mesa} consistently outperforms the other baselines under different levels of label noise. The meta-sampler $\Im$ in {\sc Mesa} can efficiently prevent the ensemble classifier from overfitting noise, as it is optimized for generalization performance, whereas the performance of the other methods decreases rapidly as the noise level increases. Compared with the second-best baselines, {\sc Mesa} achieves 12.00\%/14.44\%/10.29\%/7.22\% ({\it Mammography}) and 2.56\%/3.20\%/6.92\%/9.72\% ({\it Protein Homo.}) performance gains when $r_{\text{noise}}$=0\%/10\%/25\%/40\%. \begin{table*}[t] \centering \tiny \caption{Generalized performances on real-world imbalanced datasets with varying label noise ratios.} \label{table:label-noise} \begin{tabular}{c|cccc|cccc} \toprule \multirow{2}*{\diagbox[width=8em]{Method}{Dataset}} & \multicolumn{4}{c|}{Mammography (IR=42, 11,183 instances)} & \multicolumn{4}{c}{Protein Homo. (IR=111, 145,751 instances)} \\ \cline{2-9} & $r_{\text{noise}}$=0\% & $r_{\text{noise}}$=10\% & $r_{\text{noise}}$=25\% & $r_{\text{noise}}$=40\% & $r_{\text{noise}}$=0\% & $r_{\text{noise}}$=10\% & $r_{\text{noise}}$=25\% & $r_{\text{noise}}$=40\% \\ \hline {\sc RusBoost~\cite{seiffert2010rusboost}} & 0.511 & 0.448 & 0.435 & 0.374 & 0.738 & 0.691 & 0.628 & 0.502 \\ {\sc UnderBagging~\cite{barandela2003underbagging}} & 0.401 & 0.401 & 0.375 & 0.324 & 0.632 & 0.629 & 0.629 & 0.617 \\ {\sc SPE~\cite{liu2019self-paced-ensemble}} & 0.559 & 0.476 & 0.405 & 0.345 & 0.819 & 0.775 & 0.688 & 0.580 \\ {\sc Cascade~\cite{liu2009ee-bc}} & 0.575 & 0.540 & 0.447 & 0.357 & 0.805 & 0.781 & 0.708 & 0.594 \\ \hline {\sc Mesa (OURS)} & {\bf 0.644} & {\bf 0.618} & {\bf 0.493} & {\bf 0.401} & {\bf 0.840} & {\bf 0.806} & {\bf 0.757} & {\bf 0.677} \\ \bottomrule \end{tabular} \end{table*} \subsection{Cross-task meta-training} In the meta-training process of {\sc Mesa}, collecting transitions is independent of the updates of the meta-sampler. This enables us to simultaneously collect meta-data from multiple datasets and thus to co-optimize the meta-sampler over these tasks. Some states may rarely be observed in a specific dataset; in such cases, collecting transitions from multiple datasets in parallel also helps our meta-sampler explore the state space and learn a better policy. Moreover, as previously discussed, a converged meta-sampler can be directly applied to new and even heterogeneous tasks. Hence, by cross-task meta-training, we can obtain a meta-sampler that not only works well on the training tasks but is also able to boost {\sc Mesa}'s performance on unseen (meta-test) tasks. To verify this, we follow the setup in Section~\ref{subsubsection:cross-task-transferability}, using the two small tasks for cross-task meta-training and the two large tasks for meta-testing. We plot the generalized performance on all four tasks during the cross-task meta-training process, as shown in Fig.~\ref{fig:meta-training-process}. The performance scores of other representative EIL methods from Table~\ref{table:comparison-under-ensemble} are also included. Note that we only plot the two best-performing baselines in each subfigure for better visualization. At the very start of meta-training, the meta-sampler $\Im$ is initialized with random weights, so its performance is relatively poor at this point. As meta-training progresses, $\Im$ adjusts its sampling strategy to maximize the expected generalization performance.
After 50-60 training episodes, {\sc Mesa} surpasses the best-performing baseline method and continues to improve. Finally, we obtain a meta-sampler that is able to perform adaptive under-sampling and thereby outperforms the other EIL methods on all meta-training and meta-test tasks. \begin{figure*}[t] \centering \setlength{\fboxrule}{0.5pt} \setlength{\fboxsep}{0pt} \shadowsize = 2pt \subfigure[Performance in meta-training tasks]{ \doublebox{\includegraphics[width=0.471\linewidth]{figures/cross-task-meta-training.pdf}} \label{fig:meta-training-tasks} } \subfigure[Performance in meta-test tasks]{ \doublebox{\includegraphics[width=0.471\linewidth]{figures/cross-task-meta-test.pdf}} \label{fig:meta-test-tasks} } \caption{ Visualization of {\sc Mesa}'s cross-task meta-training process (slide mean window = 50). } \label{fig:meta-training-process} \end{figure*} \subsection{Ablation study} To assess the importance of the Gaussian-function-weighted meta-sampling and of the meta-sampler itself, we carry out ablation experiments on 4 real-world datasets: Optical Digits, Spectrometer, ISOLET, and Mammography, with increasing IR (9.1/11/12/42). The experiments shown in Table~\ref{table:ablation-test-mesa} indicate that {\sc Mesa} significantly improves performance, especially when using small ensembles on highly imbalanced datasets. \begin{table}[t] \small \centering \caption{ Ablation study of {\sc Mesa} on 4 real-world datasets. Random policy refers to using a randomly initialized meta-sampler to perform meta-sampling. $k$ represents the ensemble size. $\Delta$ is the relative performance loss (\%) compared to the {\sc Mesa} policy. } \label{table:ablation-test-mesa} \begin{tabular}{c|c|cccccc} \toprule Dataset & Method & $k=5$ & $\Delta$ & $k=10$ & $\Delta$ & $k=20$ & $\Delta$ \\ \midrule \multirow{3}*{Optical Digits} & {\sc Mesa} policy & 0.929 & baseline & 0.968 & baseline & 0.980 & baseline \\ & Random policy & 0.904 & -1.61\% & 0.959 & -0.93\% & 0.975 & -0.51\% \\ & Random sampling & 0.876 & -5.71\% & 0.927 & -4.24\% & 0.954 & -2.65\% \\ \midrule \multirow{3}*{Spectrometer} & {\sc Mesa} policy & 0.723 & baseline & 0.803 & baseline & 0.845 & baseline \\ & Random policy & 0.685 & -5.26\% & 0.774 & -3.61\% & 0.800 & -5.33\% \\ & Random sampling & 0.610 & -15.63\% & 0.692 & -13.82\% & 0.755 & -10.65\%\\ \midrule \multirow{3}*{ISOLET} & {\sc Mesa} policy & 0.787 & baseline & 0.877 & baseline & 0.921 & baseline \\ & Random policy & 0.748 & -4.96\% & 0.849 & -3.19\% & 0.891 & -3.26\%\\ & Random sampling & 0.688 & -12.58\% & 0.768 & -12.43\% & 0.812 & -11.83\%\\ \midrule \multirow{3}*{Mammography} & {\sc Mesa} policy & 0.515 & baseline & 0.644 & baseline & 0.705 & baseline \\ & Random policy & 0.405 & -21.36\% & 0.568 & -11.80\% & 0.662 & -6.10\%\\ & Random sampling & 0.307 & -40.39\% & 0.401 & -37.73\% & 0.483 & -31.49\%\\ \bottomrule \end{tabular} \end{table} \section{Implementation Details} \label{section:implementation-details} \begin{table*}[t] \centering \small \caption{Description of the real-world imbalanced datasets.} \label{table:datasets} \begin{tabular}{c|cc|ccc} \toprule Dataset & Repository & Target & Imbalance Ratio & \#Samples & \#Features\\ \midrule Optical Digits & UCI & target: 8 & 9.1:1 & 5,620 & 64 \\ Spectrometer & UCI & target: $\ge 44$ & 11:1 & 531 & 93 \\ ISOLET & UCI & target: A, B & 12:1 & 7,797 & 617 \\ Mammography & UCI & target: minority & 42:1 & 11,183 & 6 \\ Protein Homo.
& KDDCUP 2004 & target: minority & 111:1 & 145,751 & 74 \\ \bottomrule \end{tabular} \end{table*} {\bf Datasets.} All datasets used in this paper are publicly available and are summarized in Table~\ref{table:datasets}. One can fetch these datasets using the {\tt imblearn.datasets} API\footnote{\tt https://imbalanced-learn.readthedocs.io/en/stable/api.html} of the imbalanced-learn~\cite{guillaume2017imblearn} Python package. For each dataset, we hold out 20\% as the validation set and report the result of 4-fold stratified cross-validation (i.e., a 60\%/20\%/20\% train/valid/test split). We also split class-wise to ensure that the imbalance ratio of the training, validation, and test sets is the same after splitting. {\bf Base classifiers.} All base classifiers used (i.e., K-nearest neighbor classifier, Gaussian na\"ive Bayes, decision tree, adaptive boosting, gradient boosting machine) are implemented using the {\tt scikit-learn}~\cite{pedregosa2011sklearn} Python package. For the ensemble models (i.e., adaptive boosting and gradient boosting), we set {\tt n\_estimators} = 10. All other parameters use the default settings specified by the {\tt scikit-learn} package. {\bf Implementation of baseline methods.} All baseline resampling IL methods ({\sc RandomUS, NearMiss~\cite{mani2003nearmiss}, Clean~\cite{laurikkala2001ncr}, ENN~\cite{wilson1972enn}, TomekLink~\cite{tomek1976tomeklink}, AllKNN~\cite{tomek1976allknn}, OSS~\cite{kubat1997oss}, Smote~\cite{chawla2002smote}, ADASYN~\cite{he2008adasyn}, BorderSmote~\cite{han2005borderline-smote}, SmoteENN~\cite{batista2004smoteenn},} and {\sc SmoteTomek~\cite{batista2003smotetomek}}) are implemented in the {\tt imbalanced-learn} Python package~\cite{guillaume2017imblearn}. We directly use their implementations and default hyper-parameters in our experiments. We use open-source code\footnote{\tt https://github.com/dialnd/imbalanced-algorithms}\footnote{\tt https://github.com/ZhiningLiu1998/self-paced-ensemble} for the implementation of the baseline ensemble imbalanced learning (EIL) methods ({\sc RusBoost~\cite{seiffert2010rusboost}, UnderBagging~\cite{barandela2003underbagging}, Cascade~\cite{liu2009ee-bc}, SPE~\cite{liu2019self-paced-ensemble}, SmoteBoost~\cite{chawla2003smoteboost}, SmoteBagging~\cite{wang2009smotebagging},} and {\sc RamoBoost~\cite{chen2010ramoboost}}). The hyper-parameters of these baseline EIL methods are reported in Table~\ref{table:hyper-parameters-baseline}. {\bf Implementation of {\sc Mesa}.} {\sc Mesa} is implemented with {\tt PyTorch}. The empirical results reported in the paper use the hyper-parameters in Tables~\ref{table:hyper-parameters-sac} and~\ref{table:hyper-parameters-mesa} for the meta-training of {\sc Mesa}. We open-sourced our {\sc Mesa} implementation at GitHub\footnote{{\tt https://github.com/ZhiningLiu1998/mesa}} with a {\it Jupyter} notebook file that allows you to quickly (I) conduct a comparative experiment, (II) visualize the meta-training process of {\sc Mesa}, and (III) visualize the experimental results. Please check the repository for more information.
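As a concrete illustration of the data pipeline described above, the following sketch fetches a benchmark dataset and reproduces the 60\%/20\%/20\% split; the {\tt 'mammography'} key and the binarization of the target are assumptions about the {\tt imbalanced-learn} benchmark collection:

\begin{verbatim}
# Sketch of the dataset fetching and 60/20/20 splitting protocol.
# The 'mammography' key and the target encoding are assumptions
# about the imbalanced-learn benchmark collection.
from imblearn.datasets import fetch_datasets
from sklearn.model_selection import StratifiedKFold, train_test_split

data = fetch_datasets()['mammography']
X, y = data.data, (data.target == 1).astype(int)   # minority -> 1

# Hold out 20% as the validation set, stratified to preserve the IR.
X_rest, X_val, y_rest, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# 4-fold stratified CV on the remaining 80% yields 60/20/20 splits.
skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X_rest, y_rest):
    X_train, y_train = X_rest[train_idx], y_rest[train_idx]
    X_test, y_test = X_rest[test_idx], y_rest[test_idx]
    # ... train an ensemble and evaluate AUCPRC here ...
\end{verbatim}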
\begin{table}[t] \scriptsize \begin{minipage}{0.48\linewidth} \centering \caption{Hyper-parameters of EIL baselines.} \label{table:hyper-parameters-baseline} \begin{tabular}{c|c|c} \toprule Method & Hyper-parameter & Value \\ \midrule \multirow{5}*{\sc RusBoost~\cite{seiffert2010rusboost}} & n\_samples & 100 \\ & min\_ratio & 1.0 \\ & with\_replacement & True \\ & learning\_rate & 1.0 \\ & algorithm & SAMME.R \\ \midrule \multirow{4}*{\sc SmoteBoost~\cite{chawla2003smoteboost}} & n\_samples & 100 \\ & k\_neighbors & 5 \\ & learning\_rate & 1.0 \\ & algorithm & SAMME.R \\ \midrule \multirow{6}*{\sc RamoBoost~\cite{chen2010ramoboost}} & n\_samples & 100 \\ & k\_neighbors\_1 & 5 \\ & k\_neighbors\_2 & 5 \\ & alpha & 0.3 \\ & learning\_rate & 1.0 \\ & algorithm & SAMME.R \\ \midrule {\sc UnderBagging~\cite{barandela2003underbagging}} & - - - & - - - \\ \midrule {\sc SmoteBagging~\cite{wang2009smotebagging}} & k\_neighbors & 5 \\ \midrule {\sc BalanceCascade~\cite{liu2009ee-bc}} & - - - & - - - \\ \midrule \multirow{2}*{\sc SelfPacedEnsemble~\cite{liu2019self-paced-ensemble}} & hardness\_func & cross entropy \\ & k\_bins & 10 \\ \bottomrule \end{tabular} \end{minipage} \hspace{5pt} \begin{minipage}{0.5\linewidth} \centering \small \caption{Hyper-parameters of {\sc SAC~\cite{haarnoja2018soft-actor-critic}}.} \label{table:hyper-parameters-sac} \begin{tabular}{c|c} \toprule Hyper-parameter & Value \\ \midrule Policy type & Gaussian \\ Reward discount factor ($\gamma$) & 0.99 \\ Smoothing coefficient ($\tau$) & 0.01 \\ Temperature parameter ($\alpha$) & 0.1 \\ Learning rate & 1e-3 \\ Learning rate decay steps & 10 \\ Learning rate decay ratio & 0.99 \\ Mini-batch size & 64 \\ Replay memory size & 1e3 \\ Steps of gradient updates & 1e3 \\ Steps of random actions & 5e2 \\ \bottomrule \end{tabular} \vspace{18pt} \centering \caption{Hyper-parameters of {\sc Mesa}.} \label{table:hyper-parameters-mesa} \begin{tabular}{c|c} \toprule Hyper-parameter & Value \\ \midrule Meta-state size & 10 \\ Gaussian function parameter $\sigma$ & 0.2 \\ \bottomrule \end{tabular} \end{minipage} \end{table} The actor policy network of the meta-sampler is a multi-layer perceptron with one hidden layer containing 50 nodes; its architecture is thus \{{\tt state\_size}, 50, 1\}. The corresponding (target) critic Q-network is also an MLP but with two hidden layers. As it takes both state and action as input, its architecture is thus \{{\tt state\_size}+1, 50, 50, 1\}. Each hidden node uses the ReLU activation function, and the output of the policy network passes through a tanh activation, affinely rescaled to guarantee that the output lies in the interval $[0,1]$. Following common practice, we employ the Adam optimizer to optimize the policy and critic networks.
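A minimal {\tt PyTorch} sketch of these two networks (a deterministic simplification of the {\sc Sac} Gaussian policy head; the affine rescaling of the tanh output to $[0,1]$ is our reading of the description above) is:

\begin{verbatim}
# Sketch of the actor ({state_size, 50, 1}) and critic
# ({state_size + 1, 50, 50, 1}) networks described above.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    def __init__(self, state_size=10, hidden=50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_size, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, s):
        # tanh squashing, affinely rescaled so mu lies in [0, 1].
        return 0.5 * (torch.tanh(self.net(s)) + 1.0)

class CriticNet(nn.Module):
    def __init__(self, state_size=10, hidden=50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_size + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, s, a):
        # Soft Q-function Q(s, a): concatenate state and action.
        return self.net(torch.cat([s, a], dim=-1))

policy, critic = PolicyNet(), CriticNet()
optimizer = torch.optim.Adam(
    list(policy.parameters()) + list(critic.parameters()), lr=1e-3)
\end{verbatim}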
\begin{table}[t] \footnotesize \centering \caption{Performance of different policy network architectures.} \label{table:policy-network-structures-mesa} \begin{tabular}{c|ccc} \toprule \multirow{2}*{Network Architecture} & \multicolumn{3}{c}{Optical Digits Task} \\ \cline{2-4} & $k$=5 & $k$=10 & $k$=20 \\ \midrule \{10, 50, 1\} & 0.929$\pm$0.015 & 0.968$\pm$0.007 & 0.980$\pm$0.003 \\ \{10, 100, 1\} & 0.930$\pm$0.014 & 0.966$\pm$0.007 & 0.979$\pm$0.004 \\ \{10, 200, 1\} & 0.922$\pm$0.018 & 0.964$\pm$0.008 & 0.978$\pm$0.005 \\ \{10, 25, 25, 1\} & 0.928$\pm$0.014 & 0.966$\pm$0.007 & 0.980$\pm$0.004 \\ \{10, 50, 50, 1\} & 0.929$\pm$0.017 & 0.967$\pm$0.008 & 0.978$\pm$0.004 \\ \{10, 100, 100, 1\} & 0.926$\pm$0.015 & 0.966$\pm$0.010 & 0.979$\pm$0.006 \\ \{10, 10, 10, 10, 1\} & 0.924$\pm$0.013 & 0.964$\pm$0.007 & 0.977$\pm$0.004 \\ \{10, 25, 25, 25, 1\} & 0.924$\pm$0.016 & 0.966$\pm$0.006 & 0.978$\pm$0.002 \\ \{10, 50, 50, 50, 1\} & 0.926$\pm$0.006 & 0.965$\pm$0.006 & 0.979$\pm$0.005 \\ \bottomrule \end{tabular} \end{table} We test different network architectures in our experiments. Table~\ref{table:policy-network-structures-mesa} depicts some representative results under 9 different policy network structures with different depths and widths. It can be observed that varying the MLP settings has no substantial effect on the final result; we hence prefer the simple, shallow one. \section{Discussion} \subsection{Complexity analysis of the proposed framework} \label{section:complexity-analysis} Our {\sc Mesa} framework can be roughly regarded as an under-sampling-based ensemble imbalanced learning (EIL) framework (Algorithm~\ref{alg:ensemble-training}) with an additional sampler meta-training process (Algorithm~\ref{alg:meta-training}). {\bf Ensemble training.} Consider an imbalanced dataset $\mathcal{D}$ with majority set $\mathcal{N}$ and minority set $\mathcal{P}$, where $|\mathcal{N}| \gg |\mathcal{P}|$, and suppose that the cost of training a base classifier $f(\cdot)$ with $N$ training instances is $C_{f\text{train}}(N)$. As {\sc Mesa} performs strictly balanced under-sampling to train each classifier, we have \[ \text{Cost of $k$-classifier ensemble training}: k \cdot C_{f\text{train}}(2|\mathcal{P}|) \] In comparison, the cost is $k \cdot C_{f\text{train}}(|\mathcal{N}|+|\mathcal{P}|)$ for reweighting-based EIL methods (e.g., {\sc AdaBoost}) and around $k \cdot C_{f\text{train}}(2|\mathcal{N}|)$ for over-sampling-based EIL methods (e.g., {\sc SmoteBagging}). {\bf Meta-training.} Let us denote the cost of performing a single gradient update step of the meta-sampler $\Im$ as $C_{\Im \text{update}}$; this cost mainly depends on the choice of the policy/critic network architecture and is barely influenced by other factors such as the dataset size in ensemble training. In our {\sc Mesa} implementation, we perform $n_\text{random}$ steps of collecting transitions with random actions before starting to update $\Im$, and $n_\text{update}$ steps of collecting online transitions and performing gradient updates to $\Im$. Then we have \[ \text{Cost of meta-training}: (n_\text{random}+n_\text{update}) \cdot C_{f\text{train}}(2|\mathcal{P}|) + n_\text{update} \cdot C_{\Im \text{update}} \] As mentioned before, the meta-training cost can be effectively reduced by scaling down the meta-training dataset (i.e., reducing $|\mathcal{P}|$). This can be achieved by using a subset of the original data in meta-training.
One can also directly use a meta-sampler pre-trained on another (smaller) dataset to avoid the meta-training phase when applying {\sc Mesa} to new tasks. Both approaches bring only a minor performance loss, as reported in Fig.~\ref{fig:transfer-heatmap}. Note that reducing the number of meta-training instances only influences the $C_{f\text{train}}(\cdot)$ term. Therefore, the larger the ratio $C_{f\text{train}}(2|\mathcal{P}|)/C_{\Im \text{update}}$, the higher the acceleration brought by shrinking the meta-training set. We also show some results in Fig.~\ref{fig:subtask-meta-training-cost} to demonstrate this influence. The decision tree classifier we use has no maximum depth limit; thus, its training cost is higher when dealing with high-dimensional data. We therefore choose three tasks with different numbers of features for this test: {\it Mammography}/{\it Protein Homo.}/{\it ISOLET} with 6/74/617 features. It can be observed that the acceleration effect is slightly weaker for the low-dimensional {\it Mammography} task, as the cost of training a base classifier is small compared with the cost of updating the meta-sampler. On the other hand, for the high-dimensional tasks (i.e., {\it ISOLET} and {\it Protein Homo.}), shrinking the meta-training set greatly reduces the cost of meta-sampler training, as expected. \begin{figure}[t] \centering \subfigure[Sub-task transfer performance.]{ \centering \includegraphics[width=0.48\linewidth]{figures/subtask-meta-training-performance.pdf} } \subfigure[Sub-task meta-training time.]{ \centering \includegraphics[width=0.48\linewidth]{figures/subtask-meta-training-time-log.pdf} } \caption{The influence of scaling down the meta-training set.} \label{fig:subtask-meta-training-cost} \end{figure} \subsection{Guidelines for selecting {\sc Mesa} hyper-parameters} \label{section:hyper-parameter} The meta-state size $b$ determines how detailed our error distribution approximation is (i.e., the number of bins in the histogram). Thus, setting a small meta-state size may lead to poor performance, while increasing it to a large value (e.g., $\ge$20) brings greater computational cost but only a trivial performance increment. We recommend setting the meta-state size to 10; one can try a larger meta-state when working on larger datasets. The Gaussian function parameter $\sigma$ determines how meta-sampling is executed in {\sc Mesa}. Specifically, given an action $\mu$, we expect the meta-sampling to select those instances with error values close to $\mu$. Besides, the meta-sampling is also responsible for providing diversity, which is an important characteristic in classifier combination. A small $\sigma$ guarantees selecting examples with errors close to $\mu$, but this results in subsets that lack diversity. For example, in the late iterations of ensemble training, most of the data instances have stable error values, and meta-sampling with a small $\sigma$ will always return the same training set for a specific $\mu$, which is detrimental to further improving the ensemble classifier. Setting a large $\sigma$ ``flattens'' the Gaussian function: more instances with different errors are likely to be selected, bringing more diversity. However, when $\sigma \to \infty$, the meta-sampling turns into uniform random under-sampling, which renders meta-training pointless. We also note that although one can extend the policy to automatically determine $\sigma$, this requires additional computational cost and the benefit is very limited.
More importantly, selecting an inappropriate $\sigma$ will interfere with the quality of the collected transitions, causing an unstable meta-training process. Therefore, we suggest using $\sigma=0.2$ to balance these factors. \section{Visualization} \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{figures/actions-under-noises.pdf} \caption{Learned meta-sampler policies on the {\it Mammography} dataset with varying label noise ratios.} \label{fig:action-under-noises} \end{figure} \subsection{Visualization of learned meta-sampler policy} We visualize the learned meta-sampler policy under different levels of noise in Fig.~\ref{fig:action-under-noises}. It clearly shows that the sampling strategy becomes more conservative as the noise ratio grows. At the very start of ensemble training, there are only a few base learners and thus the ensemble classifier underfits the training set. At this point, the meta-sampler tends to select training instances with larger errors, hence accelerating the fitting process. It continues to use such a strategy on datasets with little or no noise. However, on highly noisy datasets (e.g., with label noise ratio $\ge$ 40\%), the meta-sampler prefers to select training instances with relatively lower errors in later iterations, as the hard-to-classify instances are likely to be noise/outliers. This effectively prevents the ensemble classifier from overfitting noisy data points. \subsection{Visualization of meta-training process} We visualize the meta-training process in Fig.~\ref{fig:visualization-meta-training-process}. As meta-training progresses, the classification performance shows consistent improvement on the training, validation, and test sets in all tasks. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{figures/meta_train_optical.pdf} \includegraphics[width=1.0\linewidth]{figures/meta_train_spectrometer.pdf} \includegraphics[width=1.0\linewidth]{figures/meta_train_ISOLET.pdf} \includegraphics[width=1.0\linewidth]{figures/meta_train_mammo.pdf} \includegraphics[width=1.0\linewidth]{figures/meta_train_protein.pdf} \caption{Train/Validation/Test performance during meta-training process (slide mean window=50).} \label{fig:visualization-meta-training-process} \end{figure}
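The learned policies in Fig.~\ref{fig:action-under-noises} are sequences of actions $\mu_t$; the sampling-weight profile each action induces follows directly from Eq.~\ref{eq:gaussian-function}. A short sketch for visualizing these profiles, and the flattening effect of $\sigma$ discussed in Section~\ref{section:hyper-parameter}, is:

\begin{verbatim}
# Visualize the Gaussian sampling-weight profiles g_{mu,sigma}(x)
# induced by different actions mu and widths sigma.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.0, 1.0, 200)   # classification error
def g(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) \
        / (sigma * np.sqrt(2 * np.pi))

for mu in (0.2, 0.5, 0.8):       # different actions
    plt.plot(x, g(x, mu, 0.2), label='mu=%.1f, sigma=0.2' % mu)
plt.plot(x, g(x, 0.5, 0.5), '--', label='mu=0.5, sigma=0.5 (flat)')
plt.xlabel('classification error'); plt.ylabel('sampling weight')
plt.legend(); plt.show()
\end{verbatim}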
\section{Introduction} \large \par The theory of strong interactions predicts a quark-hadron phase transition under conditions of extremely high nuclear density and very high temperature. In this transition, matter consisting of free quarks and gluons, called quark-gluon plasma (QGP), turns into matter in the confined phase of quarks bound inside hadrons~\cite{kar,at,sat,bus}. The system is believed to exist only for a short time, and this short span complicates the search for the transition phenomena. It is believed that the early universe, whose expansion is described by the Big Bang theory, was very hot at the beginning and subsequently cooled down as it expanded; the phase transition thus took place in due course of the universe's expansion. One way to study the phase transition is through very high temperature systems produced in the laboratory; another is through the study of matter at very high nuclear density. Such high-density nuclear matter is normally found in compact objects such as neutron stars and boson stars~\cite{bub,ru,bh,bh1,dav}. These objects are believed to form after the death of giant stars in fiery explosions called supernovae. To investigate the nature of the universe, several experimental facilities have been set up around the globe, such as the Relativistic Heavy Ion Collider (RHIC) at BNL and the Large Hadron Collider (LHC) at CERN. These two facilities have examined the creation and formation of our universe through head-on collisions of very energetic ion beams, and they have claimed the creation of a mini universe, the quark-gluon plasma (QGP)~\cite{c,esh,cai}, thereby providing information about matter at very high temperature. On the other hand, facilities such as FAIR at Darmstadt and NICA at Dubna focus on dense baryonic matter, and the Baryonic Matter at Nuclotron (BM@N) experiment, which extracts ion beams from the modernized Nuclotron, will provide future information about the formation of QGP under the influence of compressed dense nuclear matter~\cite{jpc,ba,arsene,back,adams,adcox}. All the facilities available so far are trying to detect the existence of the critical point in the phase structure, the early-universe phase transition, the formation of QGP, and the quantum chromodynamics (QCD) phase structure at very high nuclear density. The investigation of quark-gluon plasma (QGP) through ultra-relativistic heavy-ion collisions has thus become an exciting field in the current scenario of heavy-ion collider physics. In this review article, we focus on the calculation of the bulk thermodynamic properties of such matter at very high temperature, extending our earlier works on the zero-loop and one-loop corrections by incorporating the two-loop correction~\cite{r1,s4}. More stable QGP droplets have been reported, and the parametrization value used in the two-loop case strongly affects the size of the droplets that form. The droplet evolution under the two-loop correction is therefore more likely to predict changes in the stability of droplet formation. \section{Potentials of loop correction} The interaction potential among quarks and anti-quarks at zero loops is defined as \begin{eqnarray}\label{3.18} V_{\mbox{zero}}(p) &=& \frac{8 \pi}{p} \gamma~\alpha_{s}(p) T^{2} - \frac{m_{0}^{2}}{2 p}.
\end{eqnarray}
The potential obtained when the one loop correction is introduced into the system is
\begin{eqnarray}\label{eq:Vone}
V_{\mbox{one}}(p) &=& V_{\mbox{zero}}(p)\Bigl[1+\frac{\alpha_{s}(p)a_{1}}{4\pi}\Bigr],
\end{eqnarray}
in which $\gamma$ is the parametrization value, expressed in terms of quark and gluon parametrization factors. Its value differs between the zero, one and two loop corrections~\cite{s1,s2}. In the zero-loop case the quark and gluon parametrizations for stable droplet formation are $\gamma_{q}=1/6$ and $\gamma_{g}=(6 - 8)~\gamma_{q}$, whereas for the one loop correction they are taken as $\gamma_{q}=1/8$ and $\gamma_{g}=(8 - 10)~\gamma_{q}$. We now extend the potential to the two loop case,
\begin{eqnarray}\label{eq:Vtwo}
V_{\mbox{two}}(p) &=& V_{\mbox{zero}}(p)\Bigl[1+\frac{\alpha_{s}(p)a_{1}}{4\pi}+ \frac{\alpha_{s}^2(p)a_{2}}{16 \pi^2}\Bigr],
\end{eqnarray}
and the parametrization values giving stable droplet formation with the two loop correction are $\gamma_{q}=1/14$ and $\gamma_{g}=(48 - 52)~\gamma_{q}$. All these factors determine the stability of the droplet as well as the dynamics of the QGP flow, much as the standard Reynolds number does for liquid flow, and these parameters drive the transformation to the bound state of matter called hadrons. In addition to these parameters we need the loop-correction coefficient $a_{1}$ for one loop in the potential equation, which is obtained from the interacting potential among the particles. The coefficient $a_{1}$ of the one loop correction to the interactions is given as~\cite{brambilla,melnikov,hoang}
\begin{equation}
a_{1}= 2.5833-0.2778~ n_{l},~
\end{equation}
where $n_{l}$ is the number of light quark flavours~\cite{fischler,billoire,smirnov,smir}. This value is defined for the one loop correction, and we extend the calculation to interactions up to two loops, where the coefficient is
\begin{equation}
a_{2}=28.5468 -4.1471~ n_{l}+ 0.0772~ n_{l}^2\,.
\end{equation}
Similarly, the thermal mass factor depends on the loop order. The mass with the one loop correction is defined as
\begin{equation}
m_{one}^2(T)=2 \gamma^2 g^2(p) T^2[1+g^2(p) a_{1}],
\end{equation}
whereas with the two loop correction it is
\begin{equation}
m_{two}^2(T)=m_{one}^2(T)~+~2\gamma^2 g^{6}(p) T^2 a_{2}\,.
\end{equation}
These thermal masses, obtained after incorporating the one and two loop corrections, play a pivotal role in determining the interacting potential; indeed, the loop-corrected interaction potentials arise from the thermal effective mass of the quarks and anti-quarks.
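To indicate the numerical size of these loop coefficients, take, purely for illustration, $n_{l}=3$ light quark flavours; then
\begin{eqnarray*}
a_{1} &=& 2.5833-0.2778\,(3)=1.7499\,,\\
a_{2} &=& 28.5468-4.1471\,(3)+0.0772\,(3)^{2}=16.8003\,.
\end{eqnarray*}
Although the coefficients grow with the loop order, each successive term in Eqs.~(\ref{eq:Vone}) and (\ref{eq:Vtwo}) carries an additional power of $\alpha_{s}(p)/4\pi$, so the two loop term remains a small correction wherever the coupling is moderate.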
To achieve our aim, we now examine the grand canonical ensemble with these loop corrections.
\section{Grand canonical ensemble and free energy}
We evaluate the grand canonical ensemble with the loop corrections included in the interaction potential of the quarks and anti-quarks, which interact through the exchange of the coloured particles called gluons. The evolution obtained through the ensemble is expressed below through the density of states of the system, which is defined in such a way that the loop-corrected mean field potential is incorporated. The free energies of quarks, gluons and hadrons can then be obtained from the thermodynamic ensemble of the system. The general partition function of the system, as given by many authors~\cite{satz}, is
\begin{equation}
Z(T,\mu,V)=Tr\{\exp[-\beta(\hat{H}-\mu \hat{N})]\}\,,
\end{equation}
where $\mu$ is the chemical potential of the system, $\hat{N}$ is the quark number and $\beta =\frac{1}{T}$. Using this partition function, we express the free energy through the density of states incorporating the loop correction factor, and it is calculated by~\cite{neergaad,ss,ss1}:
\begin{equation}
F_{i}=T\ln Z(T,\mu, V)\,,
\end{equation}
\begin{equation}\label{3.20}
F_i = \eta T g_i \int \rho_{i} (p)\, p^{2}\ln [1 +\eta e^{-(\sqrt{m_{i}^2 + p^2}-\mu) /T}]\, dp~,
\end{equation}
where $\rho_{i}$ is the density of states at the corresponding loop order, $i\in\{$zero loop, one loop, two loop$\}$, and $\eta=-1$ for bosonic particles while $\eta=+1$ for fermionic particles. In the formalism of these free energies we use a density of states derived from the Thomas-Fermi model, in which the corresponding zero, one and two loop corrections are incorporated in the interacting mean field potential. Here $g_{i}$ is the degree of freedom for quarks and hadronic particles; its value is $6$ for quarks and $8$ for gluons. For hadronic particles it is not a pure number: it is defined as $g_{i}=d v/(2\pi^2)$, where $d$ is a number factor, analogous to the quark and gluon degrees of freedom, that depends on the particular hadronic particle, and $v=\frac{4}{3}\pi r^3$ is the volume of the hadron droplet. However, the density of states $\rho_{i}$ for hadronic particles is unity, with the momentum factor $p^{2}$ kept explicit, whereas for quarks and gluons it is defined in such a way that the loop-corrected mean field potential is incorporated, with unit momentum factor and the corresponding density of states~\cite{ss3}. The densities of states for quarks and gluons at the corresponding loop orders are defined below; when a given loop correction is incorporated, the corresponding density of states is applied in the calculation of the free energies using the corresponding mean field potential:
\begin{equation}
\rho_{i} (p) = \frac{v}{3 \pi^{2}}\frac{dV^{3}_{conf}(p)}{dp}~,
\end{equation}
which at zero loop gives
\begin{equation}\label{eq:rhozero}
\rho_{zero}(p) =\frac{\nu}{\pi^2}\frac{\gamma^{9}T^6}{8} g^{6}(p) A,
\end{equation}
where
\begin{eqnarray}
A&=&\frac{1}{p^2}\Bigl[\frac{1}{p^2}+\frac{2}{(p^2+\Lambda^2)\ln(1+\frac{p^2}{\Lambda^2})}\Bigr].
\end{eqnarray}
With the one loop correction the density of states becomes
\begin{equation}\label{eq:rhoone}
\rho_{one}(p) =\frac{\nu}{\pi^2}\frac{\gamma^{9}T^6}{8} g^{6}(p) B,
\end{equation}
where
\begin{eqnarray}
B&=&\Bigl\lbrace 1+\frac{\alpha_{s}(p)a_{1}}{\pi}\Bigr\rbrace^{2}\Bigl[ \frac{(1+\alpha_{s}(p)a_{1}/\pi)}{p^{4}} \nonumber \\
&+&\frac{2 (1+2\alpha_{s}(p)a_{1}/\pi)}{p^{2}(p^2+\Lambda^2)\ln(1+\frac{p^2}{\Lambda^2})}\Bigr]\,.
\end{eqnarray}
Similarly, we obtain the density of states for the two loop system as
\begin{equation}\label{eq:rhotwo}
\rho_{two}(p) =\frac{\nu}{\pi^2}\frac{\gamma^{9}T^6}{8} g^{6}(p) C,
\end{equation}
where
\begin{eqnarray}
C&=&\Bigl[ 1+\frac{\alpha_{s}(p)a_{1}}{\pi}+\frac{\alpha_{s}^2(p)a_{2}}{\pi^2}\Bigr]^{2} \nonumber \\
&\times&\Bigl[ \frac{(1+\alpha_{s}(p)a_{1}/\pi +\alpha_{s}^2(p)a_{2}/\pi^2)}{p^{4}} \nonumber \\
&+&\frac{2 (1+2\alpha_{s}(p)a_{1}/\pi+3 \alpha_{s}^2(p)a_{2}/\pi^2)}{p^{2}(p^2+\Lambda^2)\ln(1+\frac{p^2}{\Lambda^2})}\Bigr]\,.
\end{eqnarray}
Thus the correction factors perturb the density of states only by a small amount, as can be seen in the figure of potential versus momentum (Fig.~1).
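As a quick consistency check on these expressions (our observation, immediate from the formulas above), setting the loop coefficients to zero collapses each corrected density of states onto the previous one:
\begin{equation*}
B\big|_{a_{1}=0}=\frac{1}{p^{4}}+\frac{2}{p^{2}(p^2+\Lambda^2)\ln(1+\frac{p^2}{\Lambda^2})}=A\,,\qquad
C\big|_{a_{2}=0}=B\,,
\end{equation*}
so the zero, one and two loop densities of states form a nested family, reducing smoothly to the zero-loop result as $\alpha_{s}(p)a_{1}$ and $\alpha_{s}^{2}(p)a_{2}$ tend to zero.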
In these expressions, the parameter $\Lambda$ is taken at the QCD scale, $\Lambda = 0.15$~GeV. We can now set up the free energy of the system by finding the energies of the quarks, anti-quarks, gluons, and all the light and medium-light hadrons. The integral is evaluated from the lowest value of momentum, tending approximately to zero. Taking all the massive hadrons into consideration, the total free energy is calculated by adding the interfacial energy of the fireball~\cite{ss2,ss4}:
\begin{equation}
F_{total}=\sum_{i} F_{i}~+~\frac{\gamma T R^{2}}{4}\int p^2 \delta(p-T) dp,
\end{equation}
where in the first term the summation index $i$ runs over the $u$, $d$, $s$ quarks, the gluons and all the hadronic particles, and the second term is the interfacial energy, which takes over the role of the bag energy of the MIT bag model, in which the bag energy was introduced at the scale $B^{1/4}=T_{c}$. Taking the interfacial energy in place of the MIT bag energy reduces, to a large extent, the drawbacks produced by the bag energy in comparison with the MIT model calculation, and this interfacial energy depends on the temperature and on a parametric factor. In the interfacial term, $R$ is the size of the QGP droplet with the parametrization factor.
\begin{figure}[htb] \centering \includegraphics[width=7cm,clip]{PvsV.eps} \caption{Potential vs.\ momentum with and without loop corrections.} \label{fig-1} \end{figure}
\begin{figure}[htb] \centering \includegraphics[width=7cm,clip]{ga612T.eps} \caption{Free energy vs.\ $R$ at $T=152$~MeV for zero loop, for the contributing particles.} \label{fig-2} \end{figure}
\begin{figure}[htb] \centering \includegraphics[width=7cm,clip]{newbt12.eps} \caption{Free energy vs.\ $R$ at $T=152$~MeV for one loop, for the contributing particles.} \label{fig-3} \end{figure}
\begin{figure}[htb] \centering \includegraphics[width=7cm,clip]{g6q.eps} \caption{Free energy vs.\ $R$ (fm) at $\gamma_{q}=1/6$, $\gamma_{g}=6\gamma_{q}$ for the zero-loop correction.} \label{fig-4} \end{figure}
\begin{figure}[htb] \centering \includegraphics[width=7cm,clip]{agfig5.eps} \caption{Free energy vs.\ $R$ (fm) at $\gamma_{q}=1/6$, $\gamma_{g}=8\gamma_{q}$ for the zero-loop correction.} \label{fig-5} \end{figure}
\begin{figure}[htb] \centering \includegraphics[width=7cm,clip]{newbt9.eps} \caption{Free energy vs.\ $R$ (fm) at $\gamma_{q}=1/8$, $\gamma_{g}=9\gamma_{q}$ for the one loop correction.} \label{fig-6} \end{figure}
\begin{figure}[htb] \centering \includegraphics[width=7cm,clip]{ga810.eps} \caption{Free energy vs.\ $R$ (fm) at $\gamma_{q}=1/8$, $\gamma_{g}=10\gamma_{q}$ for the one loop correction.} \label{fig-7} \end{figure}
\begin{figure}[htb] \centering \includegraphics[width=7cm,clip]{fig2.eps} \caption{Free energy vs.\ $R$ (fm) at $\gamma_{q}=1/14$, $\gamma_{g}=48\gamma_{q}$ for the two loop correction.} \label{fig-8} \end{figure}
\begin{figure}[htb] \centering \includegraphics[width=7cm,clip]{fig4.eps} \caption{Free energy vs.\ $R$ (fm) at $\gamma_{q}=1/14$, $\gamma_{g}=52\gamma_{q}$ for the two loop correction.} \label{fig-9} \end{figure}
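Before turning to the results, note that the delta function makes the interfacial term elementary to evaluate, which exposes its dependence on the droplet size and temperature:
\begin{equation*}
\frac{\gamma T R^{2}}{4}\int p^{2}\,\delta(p-T)\,dp=\frac{\gamma T^{3} R^{2}}{4}\,.
\end{equation*}
At fixed temperature the surface contribution therefore grows as $R^{2}$, and it is the balance between this term and the volume contributions in $\sum_i F_i$ that shapes the free energy versus $R$ curves discussed below.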
\section{Results}
The analytical calculation of the free energy of the QGP-hadron fireball evolution, without and with the one and two loop correction factors in the interacting mean-field potential, is performed by computing the variation of the interacting potential with momentum. The potential function is slightly perturbed relative to the zero-loop case; this characteristic feature is shown in Fig.~1. The loop corrections increase the potential slightly in the low momentum region, while with increasing momentum the perturbative contribution becomes negligible, indicating that the one and two loop perturbations are very small at high momentum transfer. We then examine how the free energy changes with droplet size for the different contributing particles, using the zero-loop potential at a particular temperature, say $T=152$~MeV, chosen as an ad hoc assumption. At this same temperature $T=152$~MeV we examine the behaviour of the free energy of the constituent particles for the one loop potential, as was done for the zero-loop potential, and we obtain similar behaviour with a slight difference in the free-energy amplitude. This implies that, at any particular temperature, the zero-loop and loop-corrected free energy curves show the same phenomenon with different amplitudes. \par In Fig.~4 we show the free-energy evolution for the zero-loop potential at different temperatures with the parametrization $\gamma_{q}=1/6$ and $\gamma_{g}=6 \gamma_{q}$. We chose this particular parametrization because it yields a stable droplet; the value was found through an ad hoc search for stable droplets. Thus there is only one stable droplet formation for the zero-loop potential. If we search further for stable droplets by increasing the parameter beyond this value, keeping $\gamma_{q}=1/6$ and taking $\gamma_{g}\ge 6 \gamma_{q}$, no stable droplet is found at any other parametrization value, as shown in Fig.~5. In other words, stable droplet formation with the zero-loop potential occurs only at the parametrization $\gamma_{q}=1/6$ and $\gamma_{g}=6 \gamma_{q}$; droplets may still form at other values, but they are not truly stable. Beyond $\gamma_{g}>6 \gamma_{q}$ a droplet exists but has no stability. This indicates that the parameter value acts as a control on the behaviour of the fluid dynamics through stable droplet formation. If $\gamma_{q}$ is not $1/6$ and $\gamma_{g}$ is not equal to six times $\gamma_{q}$, we can have unstable QGP formation, and the QGP fluid may exhibit all sorts of unpredictable dynamics. In Figs.~6 and 7 we show the free energy with the one loop correction, exhibiting a smaller droplet size at different temperatures. We find similar stable droplets at the parametrization values $\gamma_{q}=1/8$ and $8 \gamma_{q}\le \gamma_{g} \le 10 \gamma_{q}$ when the one loop potential is used. \par Fig.~6 shows a slight change in the stable droplet size compared with the droplet of the zero-loop potential. Similarly, in Fig.~7 a stable droplet is obtained by changing the parametrization value up to $\gamma_{g}\le 10 \gamma_{q}$. From these two presentations there are roughly two stable droplet formations, and their stability is found only in the parametrization range $8 \gamma_{q}\le \gamma_{g}\le 10\gamma_{q}$. In a similar way we examine, phenomenologically, the evolution of the free energy when the two loop correction is added to the potential. The behaviour of the droplets is shown in Figs.~8 and 9. The free energy now evolves with a QGP drop size much smaller than that of the droplets obtained earlier with the zero and one loop potentials. The addition of the two loop correction thus modifies stable droplet formation considerably, and the parameter factors change as well.
A stable droplet is still observed, but for different parameter values, found to be $\gamma_{q}=1/14$ and $48 \gamma_{q} \le \gamma_{g}\le 52\gamma_{q}$. Only at these values do we observe a stable QGP droplet with the two loop correction. Beyond these parametrizations the QGP fluid behaves like the unstable zero-loop case, with a different, disoriented fluid dynamics, and the droplets are unstable. These characteristic features are all displayed in Figs.~8 and 9. With the incorporation of the two loop correction the droplet size is much smaller, indicating high stability: the smaller the drop, the larger its surface tension, so the liquid drop is more tightly bound and hence more stable. The parametrization factors for the zero, one and two loop corrections therefore play a very important role in finding the stable droplet in the evolution of QGP droplet formation, and they are truly significant parameters for the formation of QGP droplets.
\section{Conclusion}
We conclude from these results that, owing to the loop corrections in the mean field potential, the stability is much improved in the two loop case, as reflected in a droplet size smaller than anything comparable at one loop or without loop corrections, where the droplet sizes are found to be bigger. Loop corrections in the potential can therefore be studied as a phenomenological model for describing QGP droplet formation, with the loop correction treated as a dynamical parameter.
\subsection*{Acknowledgments}
We are very thankful to Prof.~R.~Ramanathan (retired) for his untiring work in support of preparing the manuscript, for his critical reading, and for many discussions about the possible outcomes of the manuscript.
\section{Introduction}\label{sec:intro} The class of BCK-algebras was introduced in 1966 by Imai and Is\'{e}ki \cite{II66} as the algebraic semantics for a non-classical logic having only implication. This implicational calculus is evidently due to Tarski and Bernays, but Is\'{e}ki also credits Meredith in \cite{iseki66}. The origin of the terms B, C, and K is the combinatory logic of Sch\"{o}nfinkel \cite{schon24} and Curry \cite{curry30} from the 1920's and 1930's. The class $\mt{BCK}$ of BCK-algebras is not a variety (\cite{wronski83}), but many subclasses do form varieties. In this paper we focus on the variety of commutative BCK-algebras, denoted $\mt{cBCK}$, which has ties to many other algebraic structures including MV-algebras, lattice-ordered Abelian groups, BCI-algebras, AF $C^\ast$-algebras, \L ukasiewicz algebras, commutative integral residuated lattices, and others. We note, for example, that the variety of bounded commutative BCK-algebras is term-equivalent to the variety of MV-algebras \cite{mundici86}. The core of this paper deals with a topological representation for commutative BCK-algebras. The idea of representing an algebraic structure with a topological space dates back to Stone's pioneering work \cite{stone36}, which provides a dual equivalence between the category of Boolean algebras, $\mt{BA}$, and the category of Stone spaces, $\mt{Stone}$. This was later extended in \cite{pries70} to a dual equivalence between $\mt{BDL}$, the category of bounded distributive lattices, and the category $\mt{Pries}$ of Priestley spaces; as is well known, these two equivalences both arise as natural dualities (\cite{clarkdavey98}). One may wonder whether this type of natural duality is possible for $\mt{cBCK}$. The representation we develop here will not lead to a dual equivalence: many non-isomorphic algebras will have the same spectrum (up to homeomorphism). Further, in \cite{niederkorn00}, Niederkorn showed that the variety $\mt{bcBCK}$ of bounded commutative BCK-algebras is not dualisable. The signature for the variety $\mt{bcBCK}$ is not the same as that of $\mt{cBCK}$, so this does not rule out the possibility that $\mt{cBCK}$ could be dualisable. While we would conjecture that $\mt{cBCK}$ is not dualisable, to the author's knowledge this is an open problem. However, the representing spaces are still interesting objects in their own right, and point toward some interesting connections between commutative BCK-algebras and commutative rings. The organization of this paper is as follows: in section 2 we give an overview of the necessary background of commutative BCK-algebras and the basics of their ideal theory. We also provide some examples that will be used throughout the paper. In section 3 we describe two constructions for building commutative BCK-algebras, and characterize both their ideal lattices and prime ideal lattices. The first construction is a disjoint union-type construction that seems to have first appeared in \cite{it76}. Despite this being a known construction, the characterization of the prime ideal lattices is new. The second construction involves the use of rooted trees to define commutative BCK-algebras, and this construction is new. We also show in this section that any finite subdirectly irreducible distributive p-algebra occurs as the ideal lattice of a commutative BCK-algebra. In section 4 we define the spectrum of a commutative BCK-algebra and consider its topological properties.
In particular, we show that the spectrum of a commutative BCK-algebra is a locally compact generalized spectral space which is compact if and only if the algebra is finitely generated as an ideal. We also show that when the algebra is involutory, the spectrum is a Priestley space. In section 5 we discuss the functoriality of the spectrum and define a functor $\K\mrm{X}$ from $\mt{cBCK}$ to $\mt{DL_0}$, the category of distributive lattices with 0. By focusing our attention on Noetherian spectra, we give a partial answer to the question: what lattices lie in the image of $\K\mrm{X}$? In giving this partial answer, we are finding distributive lattices that occur both as the lattices of compact open subsets of the spectra of some cBCK-algebras, and as the ideal lattices of some cBCK-algebras. We note that this paper is an adaptation of the author's dissertation \cite{evans20}, and that the results of this paper are contained in \cite{evans20} in some form, with the exception of section \ref{disjoint union in gspec}. \section{Preliminaries} \begin{definition} A \textit{commutative BCK-algebra} is an algebra $\langle A; \boldsymbol{\cdot}, 0\rangle$ of type $(2,0)$ such that \begin{enumerate} \item[]\hspace{-1cm} (cBCK1)\; $(x\boldsymbol{\cdot} y)\boldsymbol{\cdot} z=(x\boldsymbol{\cdot} z)\boldsymbol{\cdot} y$ \item[]\hspace{-1cm} (cBCK2)\; $x\boldsymbol{\cdot}(x\boldsymbol{\cdot} y)=y\boldsymbol{\cdot}(y\boldsymbol{\cdot} x)$ \item[]\hspace{-1cm} (cBCK3)\; $x\boldsymbol{\cdot} x=0$ \item[]\hspace{-1cm} (cBCK4)\; $x\boldsymbol{\cdot} 0=x$ \end{enumerate} for all $x,y,z\in A$. \end{definition} Throughout, we will write $\mb{A}=\langle A; \boldsymbol{\cdot}, 0\rangle$, and we will refer to commutative BCK-algebras as cBCK-algebras. Denote the variety of cBCK-algebras by $\mt{cBCK}$. If $\mb{A}=\langle A; \boldsymbol{\cdot}_A, 0_A\rangle$ and $\mb{B}=\langle B; \boldsymbol{\cdot}_B, 0_B\rangle$ are cBCK-algebras, we say a function $h\colon\mb{A}\to \mb{B}$ is a \textit{BCK-homomorph\-ism} if $h(x\boldsymbol{\cdot}_A y)=h(x)\boldsymbol{\cdot}_B h(y)$ for all $x,y\in A$. We note that any BCK-homomorphism is also 0-preserving due to (cBCK3). The notation $\mt{cBCK}$ will also denote the category with cBCK-algebras as objects and BCK-homomorphisms as morphisms. For the elementary properties of cBCK-algebras, we point the reader to Is\'{e}ki and Tanaka's introductory papers \cite{it78} and \cite{it76}, Tana\-ka's paper \cite{tanaka75}, Romanowska and Traczyk's paper \cite{rt80}, Traczyk's paper \cite{traczyk79}, Yutani's paper \cite{yutani77}, and the text \cite{mj94} by Meng and Jun. We collect here a few important properties. \begin{proposition}[\cite{it78}]\label{basic properties} Let $\mb{A}$ be a cBCK-algebra. \begin{enumerate} \item $\mb{A}$ is partially ordered via: $x\leq y$ if and only if $x\boldsymbol{\cdot} y=0$. \item $0\boldsymbol{\cdot} x=0$ for all $x\in A$, so $0$ is the least element in $\mb{A}$. \item The operation $\boldsymbol{\cdot}$ is right isotone; that is, if $x\leq y$, then $z\boldsymbol{\cdot} x\leq z\boldsymbol{\cdot} y$. \item The operation $\boldsymbol{\cdot}$ is left antitone; that is, if $x\leq y$, then $y\boldsymbol{\cdot} z\leq x\boldsymbol{\cdot} z$. \item The term operation $x\wedge y:=y\boldsymbol{\cdot} (y\boldsymbol{\cdot} x)$ is the greatest lower bound of $x$ and $y$. \item $x\boldsymbol{\cdot} y\leq x$ with equality if and only if $x\wedge y=0$. \item $\mb{A}$ is a semilattice with respect to $\wedge$. 
\end{enumerate} \end{proposition} The identity (cBCK2) tells us $x\wedge y=y\wedge x$; these algebras are called ``commutative'' because of this. We say a cBCK-algebra $\mb{A}$ is \textit{bounded} if there is an element $1\in A$ such that $x\boldsymbol{\cdot} 1=0$ for all $x\in A$, so $x\leq 1$ for all $x\in A$. The class of bounded cBCK-algebras may be considered as a variety as well; that is, an algebra $\mb{A}=\langle A; \boldsymbol{\cdot}, 0, 1\rangle$ of type $(2,0,0)$ is a bounded commutative BCK-algebra if it satisfies (cBCK1)-(cBCK4) as well as $x\boldsymbol{\cdot} 1=0$ for all $x\in A$. This variety will be denoted $\mt{bcBCK}$. Given a bounded cBCK-algebra $\mb{A}$ the term operation \[x\vee y := 1\boldsymbol{\cdot}\bigl( (1\boldsymbol{\cdot} x)\wedge (1\boldsymbol{\cdot} y)\bigr)\,,\] gives the least upper bound of $x$ and $y$. Is\'{e}ki and Tanaka showed in \cite{it78} that the term-reduct $\mb{A}^{\text{d}}=\langle A; \wedge, \vee\rangle$ is a lattice, while Traczyk showed in \cite{traczyk79} that $\mb{A}^\text{d}$ is a distributive lattice. \subsection{Examples} There are many natural examples of cBCK-algebras. We focus our attention on those examples which will be most useful for our purposes. The set of non-negative reals $\bb{R}_{\geq 0}$ becomes a cBCK-algebra via the operation $x\boldsymbol{\cdot} y=\max\{x-y, 0\}$. We denote this algebra by $\mb{R}^+$. This truncated difference is the prototypical operation for a generic cBCK-algebra. From this algebra we obtain important subalgebras. The non-negative integers $\bb{N}_0:=\bb{N}\cup \{0\}$ is a cBCK-subalgebra of $\mb{R}^+$ which we will denote $\mb{N}_0$. If we let $I=[0,1]$ denote the unit interval in $\bb{R}$, then we obtain a cBCK-subalgebra $\mb{I}$ of $\mb{R}^+$. Putting $Q=I\cap \bb{Q}$, we have another cBCK-subalgebra of $\mb{R}^+$ which we will denote $\mb{Q}$. For $k\in\bb{N}$, let $C_k=\bigl\{0, \frac{1}{k}, \frac{2}{k},\ldots, \frac{k-1}{k}, 1\bigr\}$. This is a cBCK-subalgebra of $\mb{I}$ we will denote $\mb{C}_k$. In particular, $\mb{C}_1$ is just the two-element cBCK-algebra with universe $\{0,1\}$. \begin{remark} The variety $\mt{bcBCK}$ is generated by $\mb{I}$; that is, $\mt{bcBCK}=\text{HSP}(\mb{I})$. This is essentially the content of Chang's Completeness Theorem for many-valued logic, see \cite{chang58} and \cite{chang59}. Chang's proof is in the language of MV-algebras, but Mundici showed in \cite{mundici86} that MV-algebras and bcBCK-algebras are term-equivalent. Further, we also have that $\mt{bcBCK}=\text{HSP}(\mb{C}_1, \mb{C}_2, \mb{C}_3, \ldots)$ (see Proposition 8.1.2 of \cite{CDM00}), but $\mt{bcBCK}$ is not finitely generated (see \cite{cornish80}). \end{remark} \subsection{Ideals} \begin{definition} A subset $I\subseteq A$ of a cBCK-algebra $\mb{A}$ is an \textit{ideal} if $0\in I$ and the following implication is satisfied: if $x\boldsymbol{\cdot} y\in I$ and $y\in I$, then $x\in I$. \end{definition} Every ideal is a down-set: take $y\in I$ and $x\in \mb{A}$ with $x\leq y$. Then $x\boldsymbol{\cdot} y=0\in I$, and since $y\in I$ we must have $x\in I$. \begin{definition} Let $\mb{A}$ be a cBCK-algebra and $P$ an ideal of $\mb{A}$. \begin{enumerate} \item We say $P$ is \textit{proper} if $P\neq \mb{A}$. \item We say $P$ is a \textit{prime ideal} if it is proper and $x\wedge y\in P$ implies $x\in P$ or $y\in P$. \item We say $P$ is an \textit{irreducible ideal} if, whenever $I\cap J=P$ for ideals $I$ and $J$, we have $I=P$ or $J=P$. 
\item We say $P$ is a \textit{meet-prime ideal} if, whenever $I\cap J\subseteq P$ for ideals $I$ and $J$, we have $I\subseteq P$ or $J\subseteq P$. \end{enumerate} \end{definition} Pa\l asinski proved in \cite{palasinski81} that prime ideals, irreducible ideals, and meet-prime ideals all coincide in a cBCK-algebra. Given a cBCK-algebra $\mb{A}$, we will denote the collection of all ideals of $\mb{A}$ by $\id(\mb{A})$ and the collection of all prime ideals of $\mb{A}$ by $\mrm{X}(\mb{A})$. As lattices, $\id(\mb{A})\cong\con(\mb{A})$. A proof can be found in \cite{rt80}, \cite{yutani77}, or \cite{AT77}, but we give the idea here: for $I\in\id(\mb{A})$, define $\theta_I\subseteq A\times A$ by $(x,y)\in \theta_I$ if and only if $x\boldsymbol{\cdot} y\in I$ and $y\boldsymbol{\cdot} x\in I$. The map sending $I\mapsto \theta_I$ is a lattice isomorphism. The inverse is $\theta\mapsto [0]_\theta$, where $[0]_\theta$ is the equivalence class of $0$. The lattice $\id(\mb{A})$ is also known to be distributive; see Lemma 3.2 of \cite{rt80}. From this and the previous paragraph, it follows that $\mt{cBCK}$ is a congruence-distributive variety. Given a subset $S$ of a cBCK-algebra $\mb{A}$, the \textit{ideal generated by $S$}, denoted $(S]$, is the smallest ideal containing $S$. Is\'{e}ki and Tanaka provide a very nice characterization of $(S]$. \begin{theorem}[\cite{it76}, Theorem 3]\label{iseki ideal theorem} Let $S$ be a subset of a cBCK-algebra $\mb{A}$. Then $x\in(S]$ if and only if there exist $s_1,\ldots, s_n\in S$ such that \[\bigl(\cdots \bigl((x\boldsymbol{\cdot} s_1)\boldsymbol{\cdot} s_2\bigr)\boldsymbol{\cdot} \cdots \boldsymbol{\cdot} s_{n-1}\bigr)\boldsymbol{\cdot} s_n=0\,.\] \end{theorem} If $S=\{x_1, x_2,\ldots, x_k\}$, we may write $(x_1, x_2, \ldots, x_k]$ rather than $(S]$. In particular, for singleton subsets $\{x\}$ we will write $(x]$. For any cBCK-algebra $\mb{A}$, the subsets $\{0_A\}$ and $A$ are always ideals. If these are the only ideals, we say $\mb{A}$ is \textit{simple}. For $x,y\in\mb{A}$, we define the notation $x\boldsymbol{\cdot} y^n$ for $n\in \bb{N}_0$ recursively by \begin{align*} x\boldsymbol{\cdot} y^0&=x\\ x\boldsymbol{\cdot} y^n&=\bigl(x\boldsymbol{\cdot} y^{n-1}\bigr)\boldsymbol{\cdot} y\,. \end{align*} Since $x\boldsymbol{\cdot} y\leq x$, any pair $x,y\in\mb{A}$ gives us a decreasing sequence \[x\boldsymbol{\cdot} y^0\geq x\boldsymbol{\cdot} y^1\geq x\boldsymbol{\cdot} y^2\geq \cdots \geq x\boldsymbol{\cdot} y^n\geq\cdots\,.\] If the underlying poset of a cBCK-algebra is totally ordered, we will call it a \textit{cBCK-chain}. For example, all of the algebras $\mb{R}^{+}$, $\mb{N}_0$, $\mb{I}$, $\mb{Q}$, and $\mb{C}_k$ for $k\in \bb{N}$ are cBCK-chains. \begin{proposition} A cBCK-chain is simple if and only if, for any $x,y\in \mb{A}$, $y\neq 0$, there is a natural number $n$ such that $x\boldsymbol{\cdot} y^n=0$. \end{proposition} This result is stated without proof in \cite{rt82}. For the sake of completeness we provide a proof here. \begin{proof} Assume first that $\mb{A}$ is simple, and take $x,y\in \mb{A}$ with $y\neq 0$. Consider the ideal $(y]$. By simplicity we must have $(y]=\mb{A}$ since $y\neq 0$. But this means $x\in(y]$; therefore, by Theorem \ref{iseki ideal theorem}, there exists $n\in\bb{N}$ such that $x\boldsymbol{\cdot} y^n=0$. On the other hand, assume for any pair $x,y\in\mb{A}$ with $y\neq 0$ that there exists $n\in\bb{N}$ such that $x\boldsymbol{\cdot} y^n=0$. Let $I$ be a non-zero ideal of $\mb{A}$. Take $z\in \mb{A}$ and $y\neq 0$ in $I$.
By hypothesis there is some $k\in\bb{N}$ such that $z\boldsymbol{\cdot} y^k=0\in I$. Since $y\in I$, we repeatedly apply the ideal property to obtain $z\in I$. Hence, $I=\mb{A}$ and $\mb{A}$ is simple. \end{proof} From this it follows that $\mb{R}^+$, $\mb{N}_0$, $\mb{I}$, $\mb{Q}$, and $\mb{C}_k$ are all simple. For some cBCK-chains that are not simple, see Examples \ref{chain of length n} and \ref{countable chain}. We note that the ideals of any cBCK-chain are linearly ordered and the prime ideals of any cBCK-chain are precisely the proper ideals; see Lemmas 2.3.1 and 2.3.2 of \cite{evans20} for proofs. \subsection{Involutory algebras} Let $\mb{A}=\langle A; \boldsymbol{\cdot}, 0\rangle$ be a cBCK-algebra. \begin{definition} For $S\subseteq A$, the \textit{annihilator of $S$} is \[S^\ast:=\{a\in \mb{A}\mid a\wedge s=0\text{ for all } s\in S\}\,.\] \end{definition} Aslam and Thaheem prove in \cite{AT91} that $(-)^\ast$ is a Galois connection and that $S^\ast$ is an ideal of $\mb{A}$. \begin{definition} We say that an ideal $I$ of $\mb{A}$ is \textit{involutory} if $I=I^{\ast\ast}$. We say the algebra $\mb{A}$ is \textit{involutory} if every ideal is involutory. \end{definition} The zero ideal $\{0\}$ and $\mb{A}$ itself are always involutory, and therefore any simple cBCK-algebra is an involutory algebra. We say that $\mb{A}$ \textit{satisfies the descending chain condition} if the sequence \[x\geq x\boldsymbol{\cdot} y\geq x\boldsymbol{\cdot} y^2\geq \cdots \geq x\boldsymbol{\cdot} y^n\geq \cdots \,.\] stabilizes for any pair $x,y\in \mb{A}$; that is, for each pair $x,y\in \mb{A}$ there is some $n\in\bb{N}_0$ such that $x\boldsymbol{\cdot} y^n=x\boldsymbol{\cdot} y^{n+1}$. One can show that $\mb{A}$ is involutory if and only if it satisfies the descending chain condition; see Theorem 3.10 of \cite{AT91} together with Theorem 3.3 of \cite{xin01}. For an integer $n\geq 1$, consider the identity \begin{align*} x\boldsymbol{\cdot} y^n = x\boldsymbol{\cdot} y^{n+1}\tag{$\text{E}_n$}\,. \end{align*} Any cBCK-algebra $\mb{A}$ satisfying ($\text{E}_1$) also must satisfy the identity $x\boldsymbol{\cdot} (y\boldsymbol{\cdot} x)=x$ and is said to be \textit{implicative}. We note also that any bounded implicative BCK-algebra is a Boolean algebra (\cite{it78}). Varieties of BCK-algebras satisfying ($\text{E}_n$) are discussed in detail in \cite{cornish80} and \cite{dyrda87}. \begin{lemma} If a cBCK-algebra $\mb{A}$ is finite, locally finite, or satisfies ($\text{E}_n$) for some $n$, then $\mb{A}$ is involutory. \end{lemma} \begin{proof} If $\mb{A}$ is finite or satisfies ($\text{E}_n$) for some $n$, clearly any decreasing sequence will stabilize. Suppose $\mb{A}$ is locally finite. Take $x,y\in\mb{A}$ and consider $\langle x,y\rangle$, the subalgebra generated by $x$ and $y$. This subalgebra contains the sequence $(x\boldsymbol{\cdot} y^n)_{n\in\bb{N}_0}$, but it is a finitely-generated subalgebra and hence finite. Thus, the sequence $(x\boldsymbol{\cdot} y^n)_{n\in\bb{N}_0}$ must stabilize. In all three cases, the algebra satisfies the descending chain condition and is therefore involutory. \end{proof}
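For a concrete illustration of the descending chain condition (a worked example of ours), consider $\mb{R}^+$: one checks from the definition that $x\boldsymbol{\cdot} y^{n}=\max\{x-ny,\,0\}$, so if $y\neq 0$ the sequence reaches $0$ for every $n\geq x/y$, while if $y=0$ it is constantly $x$. Every such sequence stabilizes, so $\mb{R}^+$ satisfies the descending chain condition and is therefore involutory, consistent with the fact that $\mb{R}^+$ is simple and every simple cBCK-algebra is involutory.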
\section{Two constructions} In this section we describe two methods of building cBCK-algebras. For each construction we characterize the ideals and prime ideals. \subsection{cBCK-unions}\label{unions} Let $\Lambda$ be an index set and $\bigl\{\mb{A}_\lambda\bigr\}_{\lambda\in\Lambda}$ a family of cBCK-algebras. Suppose further that $A_\lambda\cap A_\mu=\{0\}$ for $\lambda\neq \mu$ and let $U$ be the union of the $A_\lambda$'s. We will use the notation $U=\bigcupdot_{\lambda\in\Lambda} A_\lambda$. Equipping $U$ with the operation \[x\boldsymbol{\cdot} y=\begin{cases} x\boldsymbol{\cdot}_\lambda y & \text{ if $x,y\in A_\lambda$}\\ x & \text{ otherwise}\end{cases}\;,\] where $\boldsymbol{\cdot}_\lambda$ is the BCK-operation in $\mb{A}_\lambda$, yields a new cBCK-algebra which we will denote as $\mb{U}=\bigcupdot_{\lambda\in\Lambda} \mb{A}_\lambda$. We will refer to $\mb{U}$ as a \textit{cBCK-union}. That this construction does indeed yield a cBCK-algebra is proven in \cite{it76} in the case $|\Lambda|=2$. Extending the proof to arbitrary $\Lambda$ is tedious but straightforward; a full proof can be found in the author's dissertation \cite{evans20} (Proposition 2.2.1). Note that if $x\in\mb{A}_\lambda$ and $y\in\mb{A}_\mu$ with $\lambda\neq \mu$, then $x\wedge y=0$ in $\mb{U}$. Ideals and prime ideals in a cBCK-union are very well behaved. \begin{proposition}[\cite{yutani80}, Propositions 3 and 4]\label{ideals_in_union} A subset $I\subseteq U$ is an ideal of $\mb{U}=\bigcupdot_{\lambda\in\Lambda} \mb{A}_\lambda$ if and only if $I=\bigcupdot_{\lambda\in\Lambda} I_\lambda$, where $I_\lambda\in\id(\mb{A}_\lambda)$. For a given ideal $I$, this decomposition is unique.\end{proposition} \begin{theorem}\label{primes_in_union} Let $\mb{U}=\bigcupdot_{\lambda\in\Lambda}\mb{A}_\lambda$. An ideal $P$ of $\mb{U}$ is prime if and only if there exists $\mu\in\Lambda$ and $Q\in\mrm{X}(\mb{A}_\mu)$ so that \[P=\bigcupdot_{\lambda\in\Lambda}\mb{A}_{\lambda,\mu}^Q\,,\] where $\mb{A}_{\lambda,\mu}^Q=\left.\begin{cases}\mb{A}_\lambda&\text{if $\lambda\neq\mu$}\\ Q&\text{if $\lambda=\mu$}\end{cases}\right\}\,.$ \end{theorem} \begin{proof} First suppose $P=\bigcupdot_{\lambda\in\Lambda}\mb{A}_{\lambda,\mu}^Q$ for some $\mu\in\Lambda$ and $Q\in\mrm{X}(\mb{A}_\mu)$. By Proposition \ref{ideals_in_union}, we see that $P$ is an ideal of $\mb{U}$. Suppose $x\wedge y\in P$ but $x,y\notin P$. Then $x,y\in\mb{A}_\mu\setminus Q$, but since $Q$ is prime in $\mb{A}_\mu$ we must have $x\wedge y\notin Q$. This is a contradiction since $x\wedge y\notin Q$ implies $x\wedge y\notin P$. So $P$ must be prime. On the other hand, let $P$ be a prime ideal of $\mb{U}$. Then $P=\bigcupdot_{\lambda\in\Lambda} I_\lambda$ for ideals $I_\lambda\in\id(\mb{A}_\lambda)$. If there are indices $\alpha\neq\beta$ such that $I_\alpha\neq \mb{A}_\alpha$ and $I_\beta\neq \mb{A}_\beta$, choose $x\in \mb{A}_\alpha\setminus I_\alpha$ and $y\in \mb{A}_\beta\setminus I_\beta$. Then $x\wedge y=0\in P$, but $x,y\notin P$, a contradiction. So $I_\lambda=\mb{A}_\lambda$ for all but at most one index. But prime ideals are proper, so we have $I_\lambda=\mb{A}_\lambda$ for all but exactly one index, say $\mu$. We claim that $I_\mu$ is a prime ideal of $\mb{A}_\mu$. Take $a,b\in\mb{A}_\mu\setminus I_\mu$. Then $a,b\notin P$. Since $P$ is prime we have $a\wedge b\notin P$. Thus, $a\wedge b\notin I_\mu$, meaning $I_\mu$ is prime in $\mb{A}_\mu$. Therefore, $P$ is of the desired form. \end{proof} We take a moment to consider when a cBCK-union is involutory. In Theorem \ref{invol implies pries} we will see that an algebra $\mb{A}$ being involutory gives a great deal of information about its spectrum.
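To see Theorem \ref{primes_in_union} in a small concrete case (the element names here are ours), take $U=\{0,a,b\}$, where $A_1=\{0,a\}$ and $A_2=\{0,b\}$ are two copies of the two-element algebra $\mb{C}_1$ overlapping only in $0$. Each copy of $\mb{C}_1$ is simple, so its only prime ideal is the zero ideal, and Theorem \ref{primes_in_union} yields exactly two prime ideals of $\mb{U}$: \[\{0\}\cup A_2=\{0,b\}\qquad\text{and}\qquad A_1\cup\{0\}=\{0,a\}\,.\] In particular the zero ideal of $\mb{U}$ is not prime, since $a\wedge b=0$ while $a,b\neq 0$; this illustrates why a prime ideal must contain all but one of the components entirely.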
\begin{lemma}\label{ann_of_union} If $I=\bigcupdot_{\lambda\in\Lambda} I_\lambda\in\id(\mb{U})$, then \[I^\ast=\bigl(\,\bigcupdot_{\lambda\in\Lambda} I_\lambda\,\bigr)^\ast=\bigcupdot_{\lambda\in\Lambda} I_\lambda^\ast\,.\] \end{lemma} \begin{proof} Take $x\in I^\ast$. Since $x\in\mb{U}$, we have $x\in \mb{A}_\alpha$ for some $\alpha\in\Lambda$. Then for any $y\in I_\alpha$, we have $x\wedge y=0$, so $x\in I_\alpha^\ast\subseteq \bigcupdot_{\lambda\in\Lambda} I_\lambda^\ast$. Thus, $I^\ast\subseteq \bigcupdot_{\lambda\in\Lambda} I_\lambda^\ast$. For the other inclusion, take $x\in\bigcupdot_{\lambda\in\Lambda} I_\lambda^\ast$. Then $x\in I_\alpha^\ast$ for some $\alpha\in\Lambda$, and $x\wedge y=0$ for all $y\in I_\alpha$. If we take $z\in I_\beta$ for any $\beta\neq\alpha$, then $x\wedge z=0$ since $x\in \mb{A}_\alpha$ and $z\in \mb{A}_\beta$. Hence $x\in\bigl(\,\bigcupdot_{\lambda\in\Lambda} I_\lambda\,\bigr)^\ast=I^\ast$, and thus $\bigcupdot_{\lambda\in\Lambda} I_\lambda^\ast\subseteq I^\ast$. \end{proof} \begin{theorem}\label{involutory_union} The algebra $\mb{U}=\bigcupdot_{\lambda\in\Lambda} \mb{A}_\lambda$ is involutory if and only if each $\mb{A}_\lambda$ is involutory. \end{theorem} \begin{proof} This follows from Proposition \ref{ideals_in_union} and Lemma \ref{ann_of_union}. \end{proof} \subsection{cBCK-algebras associated to trees} Let $T$ be a rooted tree; we will use Greek letters to indicate elements of the vertex set $V(T)$, and in particular we will use $\lambda$ to indicate the root of $T$. Denote by $\bb{Z}^T$ the set of all functions $V(T)\to\bb{Z}$. Let $A^T$ be the subset of $\bb{Z}^T$ consisting of all functions $\u\colon V(T)\to\bb{Z}$ with finitely many non-zero entries and where the first non-zero entry along every root-based path is positive. For an element $\u\in A^T$ and a vertex $\alpha\in V(T)$, we will write $u_\alpha$ to indicate the value of $\u$ at $\alpha$. For a root-based path $p$ we will write $\u_p$ for the ``sub-tuple'' of $\u$ corresponding to the values of $\u$ along the path $p$. If $p$ is an interval in $T$, say $p=[\lambda, \alpha]$, we may write $\u_{[\lambda, \alpha]}$ rather than $\u_p$. On occasion we will use other standard interval notations, particularly $[\lambda, \alpha)$, indicating the interval from $\lambda$ to $\alpha$, but excluding $\alpha$. We will write $\mb{0}$ for the zero function. For the sake of clarity, we provide a small example. \begin{example} Figure \ref{fig:tree1} shows a rooted tree $T$ and an element $\u\in A^T$.
\begin{figure}[h] \centering \begin{tikzpicture} \filldraw (0,0) circle (2pt); \filldraw (0,1) circle (2pt); \filldraw (-1,1) circle (2pt); \filldraw (1,1) circle (2pt); \filldraw (-.5,2) circle (2pt); \filldraw (.5,2) circle (2pt); \filldraw (1.5,2) circle (2pt); \draw [-] (0,1) -- (0,0); \draw [-] (-1,1) -- (0,0); \draw [-] (1,1) -- (0,0); \draw [-] (0,1) -- (-.5,2); \draw [-] (0,1) -- (.5,2); \draw [-] (1,1) -- (1.5,2); \node at (0,-.3) {\small $\lambda$}; \node at (-1.3, 1) {\small $\alpha$}; \node at (-.3, 1) {\small $\beta$}; \node at (1.3, 1) {\small $\gamma$}; \node at (-.5, 2.3) {\small $\delta$}; \node at (.5, 2.3) {\small $\mu$}; \node at (1.5, 2.3) {\small $\nu$}; \node at (0,-1) {$T$}; \end{tikzpicture} \hspace{1cm} \begin{tikzpicture} \filldraw (0,0) circle (2pt); \filldraw (0,1) circle (2pt); \filldraw (-1,1) circle (2pt); \filldraw (1,1) circle (2pt); \filldraw (-.5,2) circle (2pt); \filldraw (.5,2) circle (2pt); \filldraw (1.5,2) circle (2pt); \draw [-] (0,1) -- (0,0); \draw [-] (-1,1) -- (0,0); \draw [-] (1,1) -- (0,0); \draw [-] (0,1) -- (-.5,2); \draw [-] (0,1) -- (.5,2); \draw [-] (1,1) -- (1.5,2); \node at (0,-.3) {\small $0$}; \node at (-1.3, 1) {\small $0$}; \node at (-.3, 1) {\small $2$}; \node at (1.3, 1) {\small $0$}; \node at (-.5, 2.3) {\small $-1$}; \node at (.5, 2.3) {\small $0$}; \node at (1.5, 2.3) {\small $3$}; \node at (0,-1) {$\u\in A^T$}; \end{tikzpicture} \caption{A rooted tree $T$ and an element $\u\in A^T$}\label{fig:tree1} \end{figure} So $u_\beta=2$ and $u_\delta=-1$ while $\u_{[\lambda,\beta]}=(0,2)$ and $\u_{[\lambda,\delta]}=(0,2,-1)$. \end{example} For $\u\in A^T$ and any root-based path $p$, we see that $\u_p$ is a $\bb{Z}$-valued $|p|$-tuple. Let $\leq_\ell$ denote the lexicographic order on the set of $|p|$-tuples. We will also have occasion to use the fact that $V(T)$ is partially ordered as well. We will write $\alpha\leq_T\beta$ to indicate that the vertices $\alpha$ and $\beta$ are comparable with $\alpha$ at or below $\beta$ in $T$. That is, $\alpha\leq_T\beta$ if and only if $\alpha$ is an ancestor of $\beta$. For example, $\lambda\leq_T\alpha$ for all $\alpha\in V(T)$. Define an operation on $A^T$ as follows: for $\u, \vec{v}\in A^T$ and $\alpha\in V(T)$, \[(\u\boldsymbol{\cdot} \vec{v})_\alpha= \begin{cases}u_\alpha-v_\alpha & \text{ if $\u_{[\lambda, \alpha]}>_\ell \vec{v}_{[\lambda, \alpha]}$}\\ 0 & \text{ if $\u_{[\lambda, \alpha]}\leq_\ell \vec{v}_{[\lambda, \alpha]}$} \end{cases}\,.\] Under this operation, $A^T$ becomes a cBCK-algebra which we will denote $\mb{A}^T$. The proof of this is straightforward, though rather tedious. We refer the reader to the author's dissertation \cite{evans20} (Proposition 2.4.2) for the proof.
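To make the operation concrete, here is a small worked computation; the comparison element $\vec{v}$ is ours, chosen for illustration. Let $T$ and $\u$ be as in Figure \ref{fig:tree1}, and let $\vec{v}\in A^T$ be zero everywhere except $v_\beta=1$. Then $\u_{[\lambda,\beta]}=(0,2)>_\ell(0,1)=\vec{v}_{[\lambda,\beta]}$, so $(\u\boldsymbol{\cdot}\vec{v})_\beta=2-1=1$; likewise $\u_{[\lambda,\delta]}=(0,2,-1)>_\ell(0,1,0)$ gives $(\u\boldsymbol{\cdot}\vec{v})_\delta=-1-0=-1$, and $\u_{[\lambda,\nu]}=(0,0,3)>_\ell(0,0,0)$ gives $(\u\boldsymbol{\cdot}\vec{v})_\nu=3$. At every other vertex the value works out to $0$ (either the lexicographic comparison fails or the two entries agree), so $\u\boldsymbol{\cdot}\vec{v}$ is zero except at $\beta$, $\delta$, and $\nu$. Note that the result again lies in $A^T$: along the path through $\beta$ and $\delta$, for instance, its values are $(0,1,-1)$, whose first non-zero entry is positive.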
The cBCK-order that arises can be described as follows: $\u\leq \vec{v}$ if and only if $\u_p\leq_\ell \vec{v}_p$ for every root-based path $p$. See again \cite{evans20}. \begin{remark} In \cite{cornish81}, Cornish also used rooted trees to construct cBCK-algebras: if a rooted tree $T$ (thought of as a lower semilattice with 0) admits a valuation $v$ in a cBCK-algebra $\mb{C}$, then one can define a BCK-operation on $T$ yielding a commutative BCK-algebra. In some cases, Cornish's construction gives back known examples; for instance, his Example 1.3 is the same as the cBCK-union defined in subsection \ref{unions} of the present paper. This example first appears in \cite{it76}. However, Cornish's construction is quite different from the construction presented here. Given a rooted tree $T$ with a valuation $v\colon T\to \mb{C}$ in a cBCK-algebra $\mb{C}$, Cornish's construction produces a cBCK-structure on the \textit{labels} of the vertices of $T$. In particular, if we begin with a finite rooted tree, the corresponding cBCK-algebra is finite. By contrast, using the construction defined above, each element of $\mb{A}^T$ is itself a $\bb{Z}$-labeling of $T$, and $\mb{A}^T$ is always infinite. To give a concrete example of the distinction between these two constructions, consider the two-element chain $\ch_2=\{\lambda, \alpha\}$ with $\lambda<\alpha$. Cornish's construction applied to $\ch_2$ gives (an isomorphic copy of) the unique two-element cBCK-algebra, which is simple (and Boolean). Using our construction defined above, the algebra $\mb{A}^{\ch_2}$ is countably infinite and has a proper, non-trivial ideal. See Example \ref{chain of length n} below. \end{remark} For the next lemma, recall the term operation \[\u\wedge\vec{v}=\vec{v}\boldsymbol{\cdot}(\vec{v}\boldsymbol{\cdot}\u)=\u\boldsymbol{\cdot}(\u\boldsymbol{\cdot}\vec{v})\,.\] \begin{lemma}\label{meet in A^T} For any $\u,\vec{v}\in\mb{A}^T$ and any root-based path $p$, we have \[(\u\wedge\vec{v})_p=\left.\begin{cases}\u_p & \text{ if $\u_p\leq_\ell\vec{v}_p$}\\\vec{v}_p & \text{ if $\u_p>_\ell\vec{v}_p$}\end{cases}\right\}\,.\] Consequently $(\u\wedge\vec{v})_p=\u_p\wedge_\ell\vec{v}_p$, where $\wedge_\ell$ is the meet with respect to the lexicographic order on the set of $|p|$-tuples. \end{lemma} \begin{proof} Assume first that $\u_p\leq_\ell \vec{v}_p$. Then $(\u\boldsymbol{\cdot} \vec{v})_\alpha=0$ for all $\alpha\in p$ and so $(\u\boldsymbol{\cdot} \vec{v})_p=\mb{0}_p$. From this we see that $(\u\wedge \vec{v})_\alpha=\bigl(\u\boldsymbol{\cdot}(\u\boldsymbol{\cdot} \vec{v})\bigr)_\alpha=u_\alpha$ for all $\alpha\in p$, and hence $(\u\wedge\vec{v})_p=\u_p$. Next, assume instead that $\u_p>_\ell\vec{v}_p$. Then there is some vertex $\beta\in p$ such that $\vec{v}_{[\lambda, \beta)}=\u_{[\lambda,\beta)}$ and $v_\beta<u_\beta$. So for vertices $\gamma\in[\lambda,\beta)$ we have $(\u\boldsymbol{\cdot} \vec{v})_\gamma=0$, meaning $\bigl(\u\boldsymbol{\cdot}(\u\boldsymbol{\cdot}\vec{v})\bigr)_\gamma=u_\gamma=v_\gamma$. But then for vertices $\delta\in p\setminus [\lambda,\beta)$ we have $(\u\boldsymbol{\cdot} \vec{v})_\delta=u_\delta-v_\delta$ and so $\bigl(\u\boldsymbol{\cdot}(\u\boldsymbol{\cdot}\vec{v})\bigr)_\delta=u_\delta-(u_\delta-v_\delta)=v_\delta$. Thus, we have $\bigl(\u\boldsymbol{\cdot}(\u\boldsymbol{\cdot}\vec{v})\bigr)_\alpha=v_\alpha$ for all $\alpha\in p$, and $\bigl(\u\boldsymbol{\cdot}(\u\boldsymbol{\cdot}\vec{v})\bigr)_p=\vec{v}_p$ as desired. \end{proof} The next several results describe the ideals of $\mb{A}^T$ and their general behavior. Let $\bb{P}(T)$ denote the set of all root-based paths in $T$. Consider the binary relation $\zeta\subseteq \mb{A}^T\times \bb{P}(T)$ given by \[\zeta =\{\,(\u, p)\in \mb{A}^T\times\bb{P}(T)\mid \u_p=\mb{0}_p\,\}\,.\] This relation induces a Galois connection: \begin{align*} &\text{for $U\subseteq \mb{A}^T$, put } \mc{P}(U)=\{p\in \bb{P}(T)\mid \u_p=\mb{0}_p \text{ for all $\u\in U$}\}\\ &\text{for $R\subseteq \bb{P}(T)$, put } I(R)=\{\u\in \mb{A}^T\mid \u_p=\mb{0}_p \text{ for all $p\in R$}\}\,. \end{align*} Notice that $I(\emptyset)=\mb{A}^T$ and, though it is an abuse of notation, $I(T):=I\bigl(\bb{P}(T)\bigr)=\{\mb{0}\}$.
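For instance, for the element $\u$ of Figure \ref{fig:tree1} we have \[\mc{P}(\{\u\})=\bigl\{[\lambda],\,[\lambda,\alpha],\,[\lambda,\gamma]\bigr\}\,,\] since $\u$ vanishes along exactly these root-based paths; every path through $\beta$ or $\nu$ is excluded because $u_\beta=2$ and $u_\nu=3$.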
If $R$ is a singleton set, say $R=\{p\}$, we will simply write $I(p)$, and if $p=[\lambda,\alpha]$ we will write $I(\alpha)$. \begin{proposition}\label{I_P is an ideal} For any collection of root-based paths $R$, the set $I(R)$ is an ideal of $\mb{A}^T$. \end{proposition} \begin{proof} Clearly $\mb{0}\in I(R)$. Suppose $\u\boldsymbol{\cdot}\vec{v}\in I(R)$ and $\vec{v}\in I(R)$, and pick $p\in R$. Then $\vec{v}_p=\mb{0}_p$, and we have $\vec{v}_p\leq_\ell\u_p$. This gives $(\u\boldsymbol{\cdot}\vec{v})_\alpha=u_\alpha-v_\alpha=u_\alpha$ for each $\alpha\in p$, and hence $(\u\boldsymbol{\cdot}\vec{v})_p=\u_p$. But $(\u\boldsymbol{\cdot}\vec{v})_p=\mb{0}_p$ since $\u\boldsymbol{\cdot}\vec{v}\in I(R)$, and therefore $\u_p=\mb{0}_p$. Since $p$ was arbitrary, $\u_p=\mb{0}_p$ for all $p\in R$, and $\u\in I(R)$. \end{proof} \begin{theorem}\label{ideals in A^T} For every ideal $J$ of $\mb{A}^T$, we have $J=I(\mc{P}(J))$. In particular, every ideal of $\mb{A}^T$ has the form $I(R)$ for some collection $R$ of root-based paths. \end{theorem} \begin{proof} Let $J$ be an ideal of $\mb{A}^T$. We claim that $J=I\bigl(\mc{P}(J)\bigr)$. The inclusion $\subseteq$ follows from the fact that $\mc{P}(-)$ and $I(-)$ form a Galois connection. For the other inclusion, take $\u\in I\bigl(\mc{P}(J)\bigr)$ with $\u\neq\mb{0}$. Let $\alpha\in V(T)$ be such that $\u_{[\lambda,\alpha)}=\mb{0}_{[\lambda,\alpha)}$ but $u_\alpha\neq 0$; then $u_\alpha>0$. Set $p:=[\lambda,\alpha]$. Then $p\notin\mc{P}\bigl(I\bigl(\mc{P}(J)\bigr)\bigr)=\mc{P}(J)$. So there is $\vec{v}\in J$ such that $\vec{v}_p\neq \mb{0}_p$. Let $\beta\in[\lambda,\alpha]$ be such that $\vec{v}_{[\lambda,\beta)}=\mb{0}_{[\lambda,\beta)}$ and $v_\beta\neq 0$. Note that $v_\beta>0$. Let $k=\bigl\lceil\frac{u_\beta}{v_\beta}\bigr\rceil$ and put $n=k+1$. We claim that $(\u\boldsymbol{\cdot}\vec{v}^n)_q=\mb{0}_q$ for any root-based path $q$ having $p$ as a prefix. If $\beta <_T\alpha$, then $u_\beta=0$ and $n=1$. But $v_\beta>0$ tells us $\u_p<_\ell \vec{v}_p$, meaning $\u_q<_\ell \vec{v}_q$ for any root-based $q$ having $p$ as a prefix, and so $(\u\boldsymbol{\cdot}\vec{v}^n)_q=(\u\boldsymbol{\cdot}\vec{v})_q=\mb{0}_q$ for any such path $q$. So suppose $\beta=\alpha$. We know that $k$ is the smallest positive integer such that $\frac{u_\alpha}{v_\alpha}\leq k$, or equivalently $u_\alpha-k\,v_\alpha\leq 0$. By the definition of $\boldsymbol{\cdot}$, this means $(\u\boldsymbol{\cdot}\vec{v}^k)_\alpha=0$. Since $v_\alpha=v_\beta>0$, we see that $(\u\boldsymbol{\cdot}\vec{v}^k)_q<_\ell \vec{v}_q$ for any root-based $q$ containing $p$ as a prefix, and therefore $(\u\boldsymbol{\cdot}\vec{v}^n)_q=\bigl((\u\boldsymbol{\cdot}\vec{v}^k)\boldsymbol{\cdot}\vec{v}\bigr)_q=\mb{0}_q$ for any such path $q$. This proves the claim. By definition of $\mb{A}^T$, the element $\u$ has finitely many non-zero vertices, and so in particular there are finitely many vertices $\alpha$ such that $u_\alpha\neq 0$ but $\u_{[\lambda, \alpha)}=\mb{0}_{[\lambda,\alpha)}$. Said differently, there are only finitely many paths along which $\u$ takes on a non-zero value. Enumerate these vertices $\alpha_1, \alpha_2, \ldots, \alpha_m$. By the argument in the preceding paragraphs, for each $\alpha_i$ we can find an element $\vec{v}_i\in J$ and positive integer $l_i$ such that $(\u\boldsymbol{\cdot}\vec{v}_i^{l_i})_q=\mb{0}_q$ for any root-based path $q$ containing $[\lambda, \alpha_i]$ as a prefix.
But then \[\bigl(\cdots\bigl((\u\boldsymbol{\cdot}\vec{v}_1^{l_1})\boldsymbol{\cdot}\vec{v}_2^{l_2}\bigr)\boldsymbol{\cdot} \cdots \bigr)\boldsymbol{\cdot}\vec{v}_m^{l_m}=\mb{0}\in J\,.\] Since each $\vec{v}_i\in J$, repeatedly applying the ideal property gives us $\u\in J$ as well. Hence, $I\bigl(\mc{P}(J)\bigr)\subseteq J$, and therefore $I\bigl(\mc{P}(J)\bigr)= J$. \end{proof} For two root-based paths $p$ and $q$ we will write $p\subseteq q$ to indicate that $p$ is a prefix of $q$. \begin{proposition}\label{behavior of ideals in A^T} Let $R, R_1, R_2\subseteq \bb{P}(T)$ and $p, q\in\bb{P}(T)$. \begin{enumerate} \item If $p\in R$, then $I(R)\subseteq I(p)$. \item $I(R)=\bigcap_{p\in R} I(p)$. \item If $p\subseteq q$, then $I(q)\subseteq I(p)$. \item $I(p)\vee I(q)=I(p\cap q)$, and consequently $I(\alpha)\vee I(\beta)=I(\alpha\wedge_T\beta)$ for vertices $\alpha,\beta\in V(T)$. \item $I(R_1)\vee I(R_2)=\bigcap_{p\in R_1}\bigcap_{q\in R_2} I(p\cap q)$. \item $I(R_1)\cap I(R_2)=I(R_1\cup R_2)$. \end{enumerate} \end{proposition} \begin{proof}\hfill\break \noindent(1), (2), and (3) are clear. \noindent(4) From (3), we know $I(p), I(q)\subseteq I(p\cap q)$, and thus $I(p)\vee I(q)\subseteq I(p\cap q)$. For the other inclusion, take $\u\in I(p\cap q)$ so that $\u_{p\cap q}=\mb{0}_{p\cap q}$. Define $\vec{v}\in\mb{A}^T$ by \[v_\alpha=\left.\begin{cases}0 &\text{ if $\alpha\in p$}\\ u_\alpha&\text{ if $\alpha\in T\setminus p$}\end{cases}\right\}\,.\] Note that \begin{align*} (\u\boldsymbol{\cdot}\vec{v})_\alpha &= 0 \text{ for all $\alpha\in T\setminus(p\cap q^c)$}\\ (\u\boldsymbol{\cdot}\vec{v})_\alpha &= u_\alpha \text{ for all $\alpha\in p\cap q^c$}\,, \end{align*} and that $\vec{v}\in I(p)$. Now define $\vec{w}\in\mb{A}^T$ by \[w_\alpha=\left.\begin{cases}0 &\text{ if $\alpha\in q$}\\ (\u\boldsymbol{\cdot}\vec{v})_\alpha&\text{ if $\alpha\in T\setminus q$}\end{cases}\right\}\] and notice that \begin{align*} \bigl((\u\boldsymbol{\cdot}\vec{v})\boldsymbol{\cdot}\vec{w}\bigr)_\alpha &= 0 \text{ for all $\alpha\in T\setminus(p\cap q^c)$}\\ \bigl((\u\boldsymbol{\cdot}\vec{v})\boldsymbol{\cdot}\vec{w})_\alpha &= 0 \text{ for all $\alpha\in p\cap q^c$ since $p\cap q^c\subseteq T\setminus q$}\,, \end{align*} and $\vec{w}\in I(q)$. Hence $(\u\boldsymbol{\cdot}\vec{v})\boldsymbol{\cdot}\vec{w}=\mb{0}$ with $\vec{w},\vec{v}\in I(p)\cup I(q)$, so by Theorem \ref{iseki ideal theorem} we have $\u\in\bigl(I(p)\cup I(q)\bigr]=I(p)\vee I(q)$. Thus $I(p\cap q)\subseteq I(p)\vee I(q)$, and therefore $I(p\cap q)= I(p)\vee I(q)$. \noindent(5) Using (2) and (4), together with the fact that $\id(\mb{A}^T)$ is distributive, we have \begin{align*} I(R_1)\vee I(R_2) =\Bigl(\bigcap_{p\in R_1} I(p)\Bigr)\vee\Bigl(\bigcap_{q\in R_2} I(q)\Bigr) &=\bigcap_{p\in R_1}\bigcap_{q\in R_2} I(p)\vee I(q)\\ &=\bigcap_{p\in R_1}\bigcap_{q\in R_2} I(p\cap q)\,. \end{align*} \noindent(6) Since $R_1,R_2\subseteq R_1\cup R_2$, an easy extension of (1) above gives $I(R_1\cup R_2)\subseteq I(R_1), I(R_2)$. Hence $I(R_1\cup R_2)\subseteq I(R_1)\cap I(R_2)$. For the other inclusion, take $\u\in I(R_1)\cap I(R_2)$ so that $\u_p=\mb{0}_p$ for all $p\in R_1$ and $\u_q=\mb{0}_q$ for all $q\in R_2$. Then $\u_p=\mb{0}_p$ for all $p\in R_1\cup R_2$, so $\u\in I(R_1\cup R_2)$ and the result follows. \end{proof} \begin{theorem}\label{prime ideals in A^T} An ideal of $\mb{A}^T$ is prime if and only if it can be realized as $I(p)$ for a root-based path $p$. \end{theorem} \begin{proof} We first prove $I(p)$ is a prime ideal. 
By Proposition \ref{I_P is an ideal} we know $I(p)$ is an ideal, and we note that it is a proper ideal since $u_\lambda=0$ for all $\u\in I(p)$. Suppose $\u\wedge\vec{v}\in I(p)$. Then $(\u\wedge\vec{v})_p=\mb{0}_p$ and by Lemma \ref{meet in A^T} we have either $\u_p=\mb{0}_p$ or $\vec{v}_p=\mb{0}_p$. That is, either $\u\in I(p)$ or $\vec{v}\in I(p)$. Assume now that $I$ is a prime ideal. By Theorem \ref{ideals in A^T}, we must have $I=I(R)$ for some collection $R$ of root-based paths. We note $R$ must be non-empty, for otherwise $I=\mb{A}^T$, which is a contradiction since prime ideals are proper. And if $R$ is a singleton set, we're done. So suppose $|R|\geq 2$, and assume that $R$ contains two root-based paths, say $p$ and $q$, neither of which is a prefix of the other. Then we may choose vertices $\alpha\in p\setminus (p\cap q)$ and $\beta\in q\setminus (p\cap q)$. Define $\u\in \mb{A}^T$ to be zero everywhere except $u_\alpha=1$, and similarly define $\vec{v}\in \mb{A}^T$ to be zero everywhere except $v_\beta=1$. Then certainly $\u\wedge\vec{v}=\mb{0}\in I$, but $\u\notin I$ and $\vec{v}\notin I$. Thus, $I$ is not prime, a contradiction. Therefore, the root-based paths appearing in $R$ must form an ascending chain $p_1\subseteq p_2\subseteq p_3\subseteq \cdots$. If this chain is finite, stopping at some $p_n\in R$, then we have $p_i\subseteq p_n$ for all $i$, meaning $I(p_n)\subseteq I(p_i)$ for all $i$, and consequently $I(R)=I(p_n)$ by Proposition \ref{behavior of ideals in A^T}(2). If this chain is infinite, let $p$ represent the infinite-length root-based path carved out by the $p_i$'s. Note that $\u_p=\mb{0}_p$ if and only if $\u_{p_i}=\mb{0}_{p_i}$ for all $i$, and so again $I(R)=I(p)$. \end{proof} \begin{proposition}\label{X(A^T)=P(T)} As posets, $\mrm{X}(\mb{A}^T)\cong \bb{P}(T)^\partial$, where $\bb{P}(T)^\partial$ is the order-dual of $\bb{P}(T)$. \end{proposition} \begin{proof} Define a map $\phi\colon\bb{P}(T)\to \mrm{X}(\mb{A}^T)$ by $\phi(p)=I(p)$. If $p$ and $q$ are two root-based paths with $p\subseteq q$, then $I(p)\supseteq I(q)$ by Proposition \ref{behavior of ideals in A^T}(3), and so $\phi(p)\supseteq \phi(q)$. On the other hand, suppose $\phi(q)\subseteq \phi(p)$. If $p\not\subseteq q$ then there is a vertex $\alpha$ along $p$ which is not on the path $q$. Define $\u\in\mb{A}^T$ to be zero everywhere except $u_\alpha=1$. Then $\u_q=\mb{0}_q$ and so $\u\in I(q)$, but $\u\notin I(p)$, meaning $I(q)\not\subseteq I(p)$, a contradiction. Thus, we must have $p\subseteq q$. The argument above can be modified slightly to show that $\phi$ is injective: if $p\neq q$, then $I(p)\neq I(q)$. Finally, this map is surjective by Theorem \ref{prime ideals in A^T}. Hence, $\phi$ is an order-anti-isomorphism. \end{proof} \begin{corollary}\label{X(A^T) = T^d} If $T$ is a finite rooted tree, then $\mrm{X}(\mb{A}^T)\cong T^\partial$ as posets. \end{corollary} \begin{proof} Suppose $T$ is finite. Then any root-based path in $T$ is finite and hence determined by its terminal vertex. Define $\psi\colon T\to \bb{P}(T)$ by $\psi(\alpha)=[\lambda, \alpha]$. That this is a bijection is straightforward, and certainly $\alpha\leq_T \beta$ if and only if $[\lambda, \alpha]\subseteq [\lambda, \beta]$, meaning $\psi$ is an order-isomorphism. But then $T^\partial \cong \bb{P}(T)^\partial\cong \mrm{X}(\mb{A}^T)$ by Proposition \ref{X(A^T)=P(T)} above. \end{proof} \subsection{Examples} \begin{example}\label{chain of length n} Let $\ch_n$ denote the chain of length $n-1$ viewed as a rooted tree. 
So $\ch_n$ has $n$ vertices. The algebra $\mb{A}^{\ch_n}$ is a cBCK-chain, so the ideals are linearly ordered. That is, the ideal lattice $\id(\mb{A}^{\ch_n})$ is itself a chain and the prime ideals are exactly the proper ideals. From Theorem \ref{ideals in A^T} we see that $\mb{A}^{\ch_n}$ has $n+1$ ideals, and thus it has $n$ prime ideals. That is, the chain $\mrm{X}(\mb{A}^{\ch_n})$ is isomorphic to the $n$-element chain, $\mb{n}$, which we could also see immediately from Corollary \ref{X(A^T) = T^d}. \end{example} \begin{example}\label{countable chain} Let $\ch_\infty$ denote a rooted tree that is a countably infinite chain. As in the previous example, the algebra $\mb{A}^{\ch_\infty}$ is a cBCK-chain and therefore $\id(\mb{A}^{\ch_\infty})$ is itself a chain and $\mrm{X}(\mb{A}^{\ch_\infty})=\id(\mb{A}^{\ch_\infty})\setminus \{\mb{A}^{\ch_\infty}\}$. For a root-based path $p$ in $\ch_\infty$, let $\ell(p)$ denote the length of $p$. Let $\bb{N}_0^\infty=\bb{N}_0\cup\{\infty\}$, where $k<\infty$ for all $k\in\bb{N}_0$, and note that $\ell\colon \bb{P}(\ch_\infty)\to \bb{N}_0^\infty$ is an order-isomorphism. Hence, by Proposition \ref{X(A^T)=P(T)} we have $\mrm{X}(\mb{A}^{\ch_\infty})\cong (\bb{N}_0^\infty)^\partial$ as posets. This algebra has the peculiar property that $\id(\mb{A}^{\ch_\infty})\cong (\bb{N}_0^\infty)^\partial$ as well. \end{example} In the following examples, we adopt the notation $I_R$ in place of $I(R)$ for $R\subseteq \bb{P}(T)$. In particular, for an interval $[\lambda, \alpha]$, the ideal $I(\alpha)$ will be denoted $I_\alpha$. \begin{example}\label{T_2} Figure \ref{fig:tree2} shows a tree we will call $T_2$ and the Hasse diagram for the ideals of $\mb{A}^{T_2}$, with $\mrm{X}(\mb{A}^{T_2})$ indicated in red, obtained by applying Theorems \ref{ideals in A^T} and \ref{prime ideals in A^T}. \begin{figure}[h] \centering \begin{tikzpicture} \filldraw (0,0) circle (2pt); \filldraw (-.75,1) circle (2pt); \filldraw (.75,1) circle (2pt); \draw [-] (-.75,1) -- (0,0); \draw [-] (.75,1) -- (0,0); \node at (0,-.3) {\small $\lambda$}; \node at (-.75, 1.3) {\small $\alpha_1$}; \node at (.75, 1.3) {\small $\alpha_2$}; \end{tikzpicture} \hspace{1cm} \begin{tikzpicture} \filldraw (0,0) circle (2pt); \filldraw[red] (-.75,1) circle (2pt); \filldraw[red] (.75,1) circle (2pt); \filldraw[red] (0,2) circle (2pt); \filldraw (0,3) circle (2pt); \draw [-] (-.75,1) -- (0,0); \draw [-] (.75,1) -- (0,0); \draw [-,red] (-.75,1) -- (0,2); \draw [-,red] (.75,1) -- (0,2); \draw [-] (0,2) -- (0,3); \node at (0,-.4) {\small $\{\mb{0}\}$}; \node at (-1.2, 1) {\small $I_{\alpha_1}$}; \node at (1.2, 1) {\small $I_{\alpha_2}$}; \node at (.35,2.1) {\small $I_\lambda$}; \node at (0, 3.3) {\small $\mb{A}^{T_2}$}; \end{tikzpicture} \caption{The tree $T_2$ and $\id(\mb{A}^{T_2})$}\label{fig:tree2} \end{figure} Similarly, consider $T_3$ as in Figure \ref{fig:tree3}. For notational brevity, let $I_j=I_{\alpha_j}$ and $I_{jk}=I_{\alpha_j}\cap I_{\alpha_k}$. Similar computations give the Hasse diagram for $\id(\mb{A}^{T_3})$, where again $\mrm{X}(\mb{A}^{T_3})$ is indicated in red.
\begin{figure}[h] \centering \begin{tikzpicture} \filldraw (0,0) circle (2pt); \filldraw (0,1) circle (2pt); \filldraw (-1,1) circle (2pt); \filldraw (1,1) circle (2pt); \draw [-] (-1,1) -- (0,0); \draw [-] (1,1) -- (0,0); \draw [-] (0,1) -- (0,0); \node at (0,-.3) {\small $\lambda$}; \node at (-1, 1.3) {\small $\alpha_1$}; \node at (0,1.3) {\small $\alpha_2$}; \node at (1, 1.3) {\small $\alpha_3$}; \end{tikzpicture} \hspace{1cm} \begin{tikzpicture}[scale=1.2] \node[circle,fill=black,inner sep=0pt,minimum size=4pt] (min) at (0,0) {}; \node[circle,fill=black,inner sep=0pt,minimum size=4pt] (b) at (0,1) {}; \node[circle,fill=red,inner sep=0pt,minimum size=4pt] (e) at (0,2) {}; \node[circle,fill=red,inner sep=0pt,minimum size=4pt] (g) at (0,3) {}; \node[circle,fill=black,inner sep=0pt,minimum size=4pt] (max) at (0,4) {}; \node[circle,fill=black,inner sep=0pt,minimum size=4pt] (a) at (-1,1) {}; \node[circle,fill=black,inner sep=0pt,minimum size=4pt] (c) at (1,1) {}; \node[circle,fill=red,inner sep=0pt,minimum size=4pt] (d) at (-1,2) {}; \node[circle,fill=red,inner sep=0pt,minimum size=4pt] (f) at (1,2) {}; \draw (d) -- (a) -- (min) -- (b); \draw (f) -- (c) -- (min); \draw (a) -- (e) -- (c); \draw[preaction={draw=white, -,line width=6pt}] (d) -- (b) -- (f); \draw (g) -- (max); \draw[-,red] (d)--(g) -- (f); \draw[-,red] (g)--(e); \node at (0,-.4) {\small $\{\mb{0}\}$}; \node at (-1.2, .8) {\small $I_{12}$}; \node at (1.3, .8) {\small $I_{23}$}; \node at (.35,.8) {\small $I_{13}$}; \node at (-1.25, 2) {\small $I_1$}; \node at (1.25, 2) {\small $I_3$}; \node at (.35, 2) {\small $I_2$}; \node at (.35,3) {\small $I_\lambda$}; \node at (0,4.3) {\small $\mb{A}^{T_3}$}; \end{tikzpicture} \caption{The tree $T_3$ and $\id(\mb{A}^{T_3})$}\label{fig:tree3} \end{figure} \end{example} \begin{example}\label{A^H} Figure \ref{fig:H} shows a tree with height 2; call this tree $H$. \begin{figure}[h] \centering \begin{tikzpicture} \filldraw (0,0) circle (2pt); \filldraw (.5,1) circle (2pt); \filldraw (-.5,1) circle (2pt); \filldraw (0,2) circle (2pt); \filldraw (1,2) circle (2pt); \draw [-] (.5,1) -- (0,0); \draw [-] (-.5,1) -- (0,0); \draw [-] (.5,1) -- (0,2); \draw [-] (.5,1) -- (1,2); \node at (0,-.3) {\small $\lambda$}; \node at (-.8, 1) {\small $\alpha$}; \node at (.8, 1) {\small $\beta$}; \node at (0, 2.3) {\small $\gamma$}; \node at (1, 2.3) {\small $\delta$}; \end{tikzpicture} \caption{The tree $H$}\label{fig:H} \end{figure} As in the above examples, Theorem \ref{ideals in A^T} tells us that the ideals of $\mb{A}^H$ are essentially determined by non-empty subsets of $V(H)$, but in this example we also have $\beta\in[\lambda,\gamma]\cap[\lambda,\delta]$. This reduces some of the possibilities for ideals. Similar to the previous example, write $I_{xy}$ to mean $I_x\cap I_y$. The Hasse diagram for $\id(\mb{A}^H)$ is shown in Figure \ref{fig:idAH}, where again $\mrm{X}(\mb{A}^H)$ is indicated in red.
\begin{figure} \centering \begin{tikzpicture}[scale=1.2] \node[circle,fill=black,inner sep=0pt,minimum size=4pt] (min) at (0,0) {}; \node[circle,fill=black,inner sep=0pt,minimum size=4pt] (b) at (0,1) {}; \node[circle,fill=red,inner sep=0pt,minimum size=4pt] (e) at (0,2) {}; \node[circle,fill=red,inner sep=0pt,minimum size=4pt] (g) at (0,3) {}; \node[circle,fill=black,inner sep=0pt,minimum size=4pt] (a) at (-1,1) {}; \node[circle,fill=black,inner sep=0pt,minimum size=4pt] (c) at (1,1) {}; \node[circle,fill=black,inner sep=0pt,minimum size=4pt] (d) at (-1,2) {}; \node[circle,fill=red,inner sep=0pt,minimum size=4pt] (f) at (1,2) {}; \node[circle,fill=red,inner sep=0pt,minimum size=4pt] (i) at (-1,3) {}; \node[circle,fill=red,inner sep=0pt,minimum size=4pt] (j) at (-.5,3.5) {}; \node[circle,fill=black,inner sep=0pt,minimum size=4pt] (k) at (-.5,4.3) {}; \draw (d) -- (a) -- (min) -- (b); \draw (f) -- (c) -- (min); \draw (a) -- (e) -- (c); \draw[preaction={draw=white, -,line width=6pt}] (d) -- (b) -- (f); \draw[-] (d)--(g); \draw[-,red] (g)--(e); \draw[-,red] (g)--(f); \draw[-,red] (g)--(j); \draw[-,red] (i)--(j); \draw[-] (j)--(k); \draw[-] (i)--(d); \node at (0,-.4) {\small $\{\mb{0}\}$}; \node at (-1.2, .8) {\small $I_{\alpha\gamma}$}; \node at (-1.25, 3) {\small $I_{\alpha}$}; \node at (-.2, 3.5) {\small $I_{\lambda}$}; \node at (1.3, .8) {\small $I_{\gamma\delta}$}; \node at (.35,.8) {\small $I_{\alpha\delta}$}; \node at (-1.35, 2) {\small $I_{\alpha\beta}$}; \node at (1.25, 2) {\small $I_\delta$}; \node at (.35, 2) {\small $I_\gamma$}; \node at (.35,3) {\small $I_\beta$}; \node at (-.5,4.6) {\small $\mb{A}^H$}; \end{tikzpicture} \caption{The ideals of $\mb{A}^H$}\label{fig:idAH} \end{figure} \end{example} \subsection{The algebras $\mb{A}^{T_n}$} Let $T_n$ denote the rooted tree of height one with $n$ leaves as shown in Figure \ref{fig:tree n}. \begin{figure}[b] \centering \begin{tikzpicture} \filldraw (0,0) circle (2pt); \filldraw (-1.5,1) circle (2pt); \filldraw (-.5,1) circle (2pt); \filldraw (.2,1) circle (.5pt); \filldraw (.5,1) circle (.5pt); \filldraw (.8,1) circle (.5pt); \filldraw (1.5,1) circle (2pt); \draw [-] (-1.5,1) -- (0,0); \draw [-] (-.5,1) -- (0,0); \draw [-] (1.5,1) -- (0,0); \node at (0,-.3) {\small $\lambda$}; \node at (-1.5, 1.3) {\small $\alpha_1$}; \node at (-.5, 1.3) {\small $\alpha_2$}; \node at (1.5, 1.3) {\small $\alpha_n$}; \end{tikzpicture} \hspace{1cm} \begin{tikzpicture} \filldraw (0,1) circle (2pt); \filldraw (-1.5,0) circle (2pt); \filldraw (-.5,0) circle (2pt); \filldraw (.2,0) circle (.5pt); \filldraw (.5,0) circle (.5pt); \filldraw (.8,0) circle (.5pt); \filldraw (1.5,0) circle (2pt); \draw [-] (-1.5,0) -- (0,1); \draw [-] (-.5,0) -- (0,1); \draw [-] (1.5,0) -- (0,1); \node at (0,1.3) {\small $[n]$}; \node at (-1.5, -.3) {\small $\{1\}^c$}; \node at (-.5, -.3) {\small $\{2\}^c$}; \node at (1.5, -.3) {\small $\{n\}^c$}; \end{tikzpicture} \caption{The tree $T_n$ and the poset $\text{MI}(\overline{\bb{B}}_n)$}\label{fig:tree n} \end{figure} Our objective is to generalize the observations made in Example \ref{T_2}. To that end, let $\bb{B}_n$ denote the powerset of $\{1,2,\ldots, n\}$; this is the unique (up to isomorphism) finite Boolean algebra with $n$ atoms. We will let $\overline{\bb{B}}_n$ denote $\bb{B}_n\oplus \mb{1}$, which is $\bb{B}_n$ with a new top element $\mb{1}$ adjoined. So $S<\mb{1}$ for all $S\in \bb{B}_n$. The poset of meet-irreducible elements of $\overline{\bb{B}}_n$ is also shown in Figure \ref{fig:tree n}.
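The meet-irreducible elements of $\overline{\bb{B}}_n$ can also be computed mechanically. The following short Python sketch (an informal illustration only; the encoding of elements is ad hoc) builds $\overline{\bb{B}}_n$ for a small $n$ as the powerset of $[n]$ with one new top element adjoined, and lists the elements having exactly one upper cover; in a finite lattice these are exactly the meet-irreducibles, the top being excluded automatically since it has no upper cover.
\begin{verbatim}
from itertools import combinations

n = 3
universe = frozenset(range(1, n + 1))
TOP = "1"  # the new top element adjoined to B_n

# Elements of the lattice: all subsets of [n], plus the adjoined top.
elements = [frozenset(c) for r in range(n + 1)
            for c in combinations(universe, r)] + [TOP]
assert len(elements) == 2**n + 1

def leq(x, y):
    """Order: subset order on B_n, with TOP above everything."""
    if y == TOP:
        return True
    if x == TOP:
        return False
    return x <= y

def upper_covers(x):
    above = [y for y in elements if leq(x, y) and y != x]
    # y covers x iff nothing lies strictly between x and y
    return [y for y in above
            if not any(z != y and leq(z, y) for z in above)]

mi = [x for x in elements if len(upper_covers(x)) == 1]
print(mi)  # expect the n coatoms {i}^c and the old top [n]
\end{verbatim}
For $n=3$ this prints the three coatoms $\{i\}^c$ together with $[3]$, recovering the four-element poset $T_3^\partial$ of Figure \ref{fig:tree n}.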
\begin{remark} In the literature, pseudocomplemented lattices are sometimes referred to as p-algebras. Theorem 2 of Lakser's paper \cite{lakser71} characterizes algebras of the form $\overline{\bb{B}}_n$ for $n\in\bb{N}$ as precisely the finite subdirectly irreducible distributive p-algebras. \end{remark} For a lattice $\mb{D}$, let $\text{MI}(\mb{D})$ denote the poset of meet-irreducible elements in $\mb{D}$. It is a theorem of Birkhoff that a finite poset $(P,\leq)$ uniquely determines (up to isomorphism) a finite distributive lattice $\mb{D}$ such that $P\cong \text{MI}(\mb{D})$ as posets; see \cite{birkhoff67}. \begin{theorem}\label{ideals of A^T_n form a p-alg} As lattices, $\id(\mb{A}^{T_n})\cong \overline{\bb{B}}_n$. \end{theorem} \begin{proof} Since $T_n$ is a finite rooted tree we have $\mrm{X}(\mb{A}^{T_n})\cong T_n^\partial$ as posets by Corollary \ref{X(A^T) = T^d}, and so $\text{MI}(\overline{\bb{B}}_n)\cong T_n^\partial\cong \mrm{X}(\mb{A}^{T_n})$. However, $\mrm{X}(\mb{A}^{T_n})$ is precisely the set of meet-irreducible elements of $\id(\mb{A}^{T_n})$, and we know that $\id(\mb{A}^{T_n})$ is a distributive lattice. By Birkhoff's theorem we therefore have $\id(\mb{A}^{T_n})\cong \overline{\bb{B}}_n$. \end{proof} \section{Spectra} In this section we consider spectra of cBCK-algebras. Pa\l asinski first explored a topological representation for cBCK-algebras in \cite{palasinski82}. Hoo and Murty defined the spectrum of a bounded cBCK-algebra in \cite{hoo87}; the topological space they define is not the same as Pa\l asinski's. The spectrum as defined by Hoo and Murty became the standard. Later, Meng and Jun proved in \cite{MJ98} that the spectrum of a bounded cBCK-algebra is a spectral space. Aslam, Deeba, and Thaheem studied spectra of cBCK-algebras both with and without the assumption of boundedness in \cite{ADT93}. The present work should be viewed as a continuation of the work in \cite{ADT93}, as well as a generalization of the work in \cite{MJ98}. In particular, we will show that the spectrum of any cBCK-algebra is a locally compact generalized spectral space, with compactness if and only if the algebra is finitely generated as an ideal. We go on to show that when the algebra is involutory, whether bounded or not, the spectrum is a Priestley space. Let $\mb{A}$ be a cBCK-algebra. For $S\subseteq\mb{A}$, define \[\sigma(S)=\{P\in\mrm{X}(\mb{A})\mid S\not\subseteq P\}\,.\] We will write $\sigma(a)$ for $\sigma(\{a\})$. It is straightforward to show that, for any $S\subseteq \mb{A}$, we have $\sigma(S)=\sigma\bigl((S]\bigr)$. In particular $\sigma(a)=\sigma\bigl((a]\bigr)$. Another computation gives $\sigma(a)\cap\sigma(b) = \sigma(a\wedge b)$ for $a,b\in \mb{A}$. The collection \[\mc{T}(\mb{A})=\{\sigma(I)\mid I\in\id(\mb{A})\}\] is a topology on $\mrm{X}(\mb{A})$, and the set \[\mc{T}_0(\mb{A})=\{\sigma(a)\mid a\in\mb{A}\}\] is a basis. There are several proofs of this in the literature, but we point the reader to Proposition 3.1 of \cite{ADT93}. \begin{remark}\label{sigma is isom} In \cite{ADT93} it is also shown that the map $\sigma:\id(\mb{A})\to\mc{T}(\mb{A})$ is a lattice isomorphism.
This gives an alternate proof that $\mt{cBCK}$ is a congruence-distributive variety since we now have $\mc{T}(\mb{A})\cong\id(\mb{A})\cong\con(\mb{A})$ and any topology forms a distributive lattice.\end{remark} \begin{definition} The space $\bigl(\mrm{X}(\mb{A})\,,\, \mc{T}(\mb{A})\bigr)$ is the \textit{spectrum} of $\mb{A}$.\end{definition} For a topological space $X$, denote the collection of compact open subsets of $X$ by $\K(X)$. We say a topological space $X$ is \textit{quasi-sober} if every non-empty irreducible closed subset is the closure of a point, and \textit{sober} if that point is unique. It is well-known that a space is sober if and only if it is both $T_0$ and quasi-sober. A topological space $X$ is called a \textit{spectral space} if $X$ is homeomorphic to the spectrum of some commutative ring. Hochster provided the following characterization of spectral spaces in his PhD thesis. \begin{theorem}[\cite{hochster69}]\label{hochster} A space $X$ is spectral if and only if the following conditions are satisfied: \begin{enumerate} \item[(H1)] $X$ is compact \item[(H2)] $X$ is $T_0$ \item[(H3)] $\K(X)$ is a basis and closed under finite intersections \item[(H4)] $X$ is quasi-sober. \end{enumerate} \end{theorem} \begin{remark} The term \textit{multiplicative basis} is sometimes used when a space satisfies (H3). We will use this terminology. \end{remark} \begin{remark} As mentioned at the beginning of this section, Meng and Jun proved in \cite{MJ98} that $\mrm{X}(\mb{A})$ is a spectral space for any \textit{bounded} cBCK-algebra $\mb{A}$. Confusingly, in their paper they use the term ``Stone space'' instead of ``spectral space.'' This use of terminology seems to come from Balbes and Dwinger's text \cite{balbes11}. In the literature, ``Stone space'' typically refers to a topological space which is compact, Hausdorff, and totally disconnected. The spectrum of a commutative ring $R$ -- by definition a spectral space -- is rarely Hausdorff: the closed points in $\text{Spec}(R)$ with respect to the Zariski topology are precisely the maximal ideals of $R$. The same is true for the spectrum of a cBCK-algebra as well. Despite this bit of confusion, the theorem of Meng and Jun points at a connection between bounded cBCK-algebras and commutative rings. Namely, if $\mb{A}$ is a bounded cBCK-algebra then $\mrm{X}(\mb{A})\simeq \text{Spec}(R)$ for some commutative ring $R$. In general, constructing such a ring $R$ is a rather complicated process. We refer the reader to Hochster's original 1969 paper \cite{hochster69}, the papers by Lewis \cite{lewis73} and Ershov \cite{ershov05} which discuss alternate constructions in the finite setting, or the very readable thesis by Tedd \cite{tedd16} which compares and generalizes the various constructions. \end{remark} Let us note here that $\sigma(a)$ is compact open in $\mrm{X}(\mb{A})$ for any $a\in \mb{A}$. We point to Corollary 4 of \cite{MJ98} for the proof, which does not require $\mb{A}$ to be bounded. So if $\mb{A}$ is bounded with upper bound 1, say, then $\sigma(1)=\mrm{X}(\mb{A})$ since any ideal containing 1 cannot be proper. In this case, $\mrm{X}(\mb{A})$ is compact. What happens to the spectrum $\mrm{X}(\mb{A})$ when $\mb{A}$ is not assumed to be bounded? We will see in Example \ref{noncompact example} that compactness can fail. On the other hand boundedness is not necessary for compactness: any finite spectrum $\mrm{X}(\mb{A})$ is trivially compact whether $\mb{A}$ is bounded or not. If nothing else, local compactness is always guaranteed. 
\begin{corollary}\label{X is LC} For any cBCK-algebra $\mb{A}$, the spectrum $\mrm{X}(\mb{A})$ is locally compact. \end{corollary} \begin{proof} Every prime ideal lies in $\sigma(a)$ for some $a$, since these sets form a basis, and the $\sigma(a)$'s are compact. \end{proof} Here we provide necessary and sufficient conditions for compactness of $\mrm{X}(\mb{A})$. \begin{theorem}\label{cpct iff} The space $\mrm{X}(\mb{A})$ is compact if and only if $\mb{A}$ is finitely generated as an ideal. \end{theorem} \begin{proof} Suppose $\mrm{X}(\mb{A})$ is compact. Since $\mc{T}_0=\{\sigma(a)\mid a\in\mb{A}\}$ is a basis for $\mrm{X}(\mb{A})$, we can write $\mrm{X}(\mb{A})=\bigcup_{a\in A}\sigma(a)$. By compactness we must have $\mrm{X}(\mb{A})=\bigcup_{i=1}^k\sigma(u_i)$ for some elements $u_1,\ldots, u_k\in A$. We notice that no prime ideal $P$ can contain all the $u_i$'s, for otherwise $P\notin \sigma(u_i)$ for every $i$; this is a contradiction since $\mrm{X}(\mb{A})=\bigcup_{i=1}^k\sigma(u_i)$. In particular, since maximal ideals are prime, no maximal ideal can contain the set $\{u_1, \ldots, u_k\}$. Thus the smallest ideal containing all of the $u_i$'s is $\mb{A}$. That is, $\mb{A}=(u_1, \ldots, u_k]$. Conversely, assume $\mb{A}$ is finitely generated as an ideal. Then $\mb{A}=(u_1, \ldots, u_k]$ for some elements $u_1, \ldots, u_k\in A$. Because prime ideals are proper, we must have $\{u_1,\ldots, u_k\}\not\subseteq P$ for all $P\in\mrm{X}(\mb{A})$. This means that for each prime ideal $P$ there is some $i\in\{1,\ldots, k\}$ such that $P\in\sigma(u_i)$, and hence $\mrm{X}(\mb{A})=\bigcup_{i=1}^k\sigma(u_i)$. We know each $\sigma(u_i)$ is compact, and so $\mrm{X}(\mb{A})$ is a finite union of compact sets and is therefore compact. \end{proof} \begin{definition} A topological space $X$ is a \textit{generalized spectral space} if it satisfies (H2)-(H4).\end{definition} So a generalized spectral space which happens to be compact is a spectral space. We will prove that $\mrm{X}(\mb{A})$ is a generalized spectral space in steps. \begin{lemma}\label{X(A) is T_0} $\mrm{X}(\mb{A})$ is $T_0$.\end{lemma} \begin{proof} Take $P,Q\in\mrm{X}(\mb{A})$ with $P\neq Q$. Without loss of generality, there is some $a\in P$ such that $a\notin Q$. Then $Q\in\sigma(a)$ and $P\notin \sigma(a)$, so $\sigma(a)$ is an open set separating $P$ and $Q$. \end{proof} \begin{lemma}\label{cpct opens form basis} The compact open sets $\K\mrm{X}(\mb{A})$ form a multiplicative basis for $\mrm{X}(\mb{A})$, so $\mrm{X}(\mb{A})$ satisfies (H3). \end{lemma} \begin{proof} We know $\mc{T}_0=\{\sigma(a)\mid a\in \mb{A}\}$ is a basis for $\mrm{X}(\mb{A})$ consisting of compact subsets. Let $\overline{\mc{T}}_0$ be the closure of $\mc{T}_0$ under finite unions; a moment's thought gives $\overline{\mc{T}}_0=\K\mrm{X}(\mb{A})$, and this is certainly a basis for the same topology. Take $U,V\in\K\mrm{X}(\mb{A})$. Then \begin{align*} U\cap V=\Bigl(\bigcup_{i=1}^n\sigma(a_i)\Bigr)\cap\Bigl(\bigcup_{j=1}^m\sigma(b_j)\Bigr) &=\bigcup_{i=1}^n\bigcup_{j=1}^m\sigma(a_i)\cap\sigma(b_j)\\ &=\bigcup_{i=1}^n\bigcup_{j=1}^m\sigma(a_i\wedge b_j)\in\K\mrm{X}(\mb{A})\,, \end{align*} and thus $\overline{\mc{T}}_0=\K\mrm{X}(\mb{A})$ is a multiplicative basis. \end{proof} For $I\in\id(\mb{A})$, define $V(I):=\{Q\in\mrm{X}(\mb{A})\mid I\subseteq Q\}=\sigma(I)^c$. It is straightforward to check that $V$ is order-reversing and that $V(I\cap J)=V(I)\cup V(J)$ for any ideals $I$ and $J$.
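Both facts can also be checked mechanically on a small example. In the following Python sketch (an informal illustration; the vanishing-set encoding of $\mb{A}^{T_2}$ from Example \ref{T_2} is ad hoc, and only the underlying sets matter here, not the BCK operations), the ideals $I(p)$ consist of the elements vanishing along the path $p$, and $V$ and $\sigma$ are computed directly from containment.
\begin{verbatim}
from itertools import product

# Model elements of A^{T_2} as 0/1 vectors indexed by (lam, a1, a2).
carrier = set(product((0, 1), repeat=3))

def vanishing(*coords):
    """Elements that are 0 on the given coordinates (the ideal I(p))."""
    return frozenset(u for u in carrier if all(u[i] == 0 for i in coords))

ideals = {
    "zero": vanishing(0, 1, 2),   # {0}
    "I_a1": vanishing(0, 1),      # vanish along the path to alpha_1
    "I_a2": vanishing(0, 2),      # vanish along the path to alpha_2
    "I_l":  vanishing(0),         # vanish at the root
    "A":    frozenset(carrier),
}
primes = [ideals["I_a1"], ideals["I_a2"], ideals["I_l"]]

def V(S):
    """Primes containing S."""
    return frozenset(P for P in primes if S <= P)

for I in ideals.values():
    for J in ideals.values():
        assert V(I & J) == V(I) | V(J)           # V(I meet J) = V(I) cup V(J)
        assert not (I <= J) or V(J) <= V(I)      # V is order-reversing
print("both identities verified on id(A^{T_2})")
\end{verbatim}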
\begin{lemma}\label{X(A) is sober} Every irreducible closed set in $\mrm{X}(\mb{A})$ has the form $V(P)$ for some $P\in\mrm{X}(\mb{A})$, and $V(P)$ is the closure of $\{P\}$. Consequently, $\mrm{X}(\mb{A})$ is quasi-sober, satisfying (H4). \end{lemma} \begin{proof} Let $P\in\mrm{X}(\mb{A})$. Note that $V(P)=\sigma(P)^c$, so $V(P)$ is closed. Next, suppose $V(P)=C\cup D$ for two proper closed subsets $C,D \subsetneq V(P)$. Since $C$ and $D$ are closed, there are ideals $I,J\in\id(\mb{A})$ such that $C=\sigma(I)^c$ and $D=\sigma(J)^c$. Thus, \[\sigma(P)^c=V(P)=\sigma(I)^c\cup\sigma(J)^c=\bigl(\sigma(I)\cap\sigma(J)\bigr)^c=\sigma(I\cap J)^c\] which implies that $\sigma(P)=\sigma(I\cap J)$. But by Remark \ref{sigma is isom}, $\sigma$ is injective, so $P=I\cap J$. Since $P$ is prime it is irreducible, so either $P=I$ or $P=J$. Without loss of generality, assume $P=I$. Then $V(P)=V(I)=\sigma(I)^c=C$, which is a contradiction. Hence, we cannot write $V(P)$ as a union of proper closed subsets, meaning $V(P)$ is irreducible. On the other hand, assume $C$ is an irreducible closed subset of $\mrm{X}(\mb{A})$. Since $C$ is closed we have $C=\sigma(I)^c=V(I)$ for some ideal $I$. We claim that $I$ is prime. Suppose not. Then there are ideals $J_1$ and $J_2$ such that $J_1\cap J_2=I$ but $J_1\neq I$ and $J_2\neq I$. Then \[V(I)=V(J_1\cap J_2)=V(J_1)\cup V(J_2)\,.\] Suppose $V(I)=V(J_1)$. Pa\l asinski proved in \cite{palasinski82(2)} that any ideal is equal to the intersection of the prime ideals containing it; from this we obtain \[I=\bigcap_{P\in V(I)}P=\bigcap_{Q\in V(J_1)}Q=J_1\,.\] This is a contradiction, and a similar analysis holds for $J_2$. But $J_1\cap J_2\subseteq J_1$ means $V(J_1)\subseteq V(J_1\cap J_2)=V(I)$, so we must have $V(J_1)\subsetneq V(I)$. Similarly, $V(J_2)\subsetneq V(I)$. Hence, if $I$ is not prime, we can decompose $C=V(I)=V(J_1)\cup V(J_2)$ into a union of proper closed subsets, a contradiction. Therefore, any irreducible closed subset has the form $V(P)$ for some $P\in\mrm{X}(\mb{A})$. Lastly we show that $V(P)=\overline{\{P\}}$. Suppose $P\in C$ where $C$ is a closed set; there is some ideal $I$ such that $C=\sigma(I)^c=V(I)$. So $P\in V(I)$, meaning $I\subseteq P$, and thus $V(P)\subseteq V(I)=C$. Hence, $V(P)$ is the smallest closed set containing $P$, meaning $V(P)=\overline{\{P\}}$. \end{proof} \begin{theorem}\label{X(A) is LC gspec} For any cBCK-algebra $\mb{A}$, the spectrum $\mrm{X}(\mb{A})$ is a locally compact generalized spectral space.\end{theorem} \begin{proof} Combine Corollary \ref{X is LC} and Lemmas \ref{X(A) is T_0}, \ref{cpct opens form basis}, and \ref{X(A) is sober}. \end{proof} We recall that a \textit{Priestley space} is an ordered topological space $(X,\leq, \mc{T})$ that is compact and satisfies the following separation property (PSA): if $x\not\leq y$, there exists a clopen up-set $U$ such that $x\in U$ but $y\notin U$. Notice that (PSA) implies that $X$ is Hausdorff. It is a theorem (see Theorem 4.2 of \cite{johnstone82}) that the following conditions are equivalent: \begin{enumerate} \item $X$ is Hausdorff, sober, and $\K(X)$ is a multiplicative basis. \item $X$ is compact, Hausdorff, and totally disconnected (that is, $X$ is a Stone space). \end{enumerate} Thus, if a space $X$ satisfies (PSA), is sober, and $\K(X)$ is a multiplicative basis, then $X$ is a Priestley space. In \cite{ADT93}, the authors showed that if $\mb{A}$ is involutory then $P\in\sigma(I)$ if and only if $P\notin\sigma(I^\ast)$ for any $I\in\id(\mb{A})$. 
Said another way, if $\mb{A}$ is involutory, then $\sigma(I)=V(I^\ast)=\sigma(I^\ast)^c$ for any ideal $I$ of $\mb{A}$. \begin{lemma}\label{invol imply sigma is clopen upset} Let $\mb{A}$ be involutory. Then $\sigma(I)$ is a clopen up-set in $\mrm{X}(\mb{A})$ for all $I\in\id(\mb{A})$. \end{lemma} \begin{proof} Take $I\in\id(\mb{A})$. Then $\sigma(I^\ast)^c=\sigma(I)$. Since $I^\ast$ is an ideal, $\sigma(I^\ast)$ is an open set and hence $\sigma(I)$ is clopen. Now take $P, Q\in\mrm{X}(\mb{A})$ with $P\subseteq Q$. Assume $P\in\sigma(I)$. Then \begin{align*} P\notin\sigma(I^\ast) &\Longrightarrow I^\ast\subseteq P\subseteq Q\\ &\Longrightarrow Q\notin\sigma(I^\ast)\\ &\Longrightarrow Q\in\sigma(I) \end{align*} which shows that $\sigma(I)$ is an up-set. \end{proof} \begin{theorem}\label{invol implies pries} If $\mb{A}$ is involutory, then $\mrm{X}(\mb{A})$ is a Priestley space. \end{theorem} \begin{proof} We first show $\mrm{X}(\mb{A})$ satisfies (PSA). Suppose we have $P,Q\in\mrm{X}(\mb{A})$ with $P\not\subseteq Q$. Then there is some $a\in \mb{A}$ such that $a\in P$ and $a\notin Q$, which implies $P\notin \sigma(a)$ and $Q\in\sigma(a)$. Therefore we have $P\in\sigma\bigl(\{a\}^\ast\bigr)$ and $Q\notin\sigma\bigl(\{a\}^\ast\bigr)$. Lemma \ref{invol imply sigma is clopen upset} tells us $\sigma\bigl(\{a\}^\ast\bigr)$ is a clopen up-set. Hence, $\mrm{X}(\mb{A})$ satisfies (PSA). We have already seen that $\mrm{X}(\mb{A})$ is sober (Lemmas \ref{X(A) is T_0} and \ref{X(A) is sober}) and that $\K\mrm{X}(\mb{A})$ is a multiplicative basis (Lemma \ref{cpct opens form basis}). So $\mrm{X}(\mb{A})$ is a Priestley space. \end{proof} We now collect some properties of spectra for any cBCK-algebra $\mb{A}$. Let $\mrm{M}(\mb{A})$ denote the maximal spectrum of $\mb{A}$; that is, the subset of $\mrm{X}(\mb{A})$ consisting of maximal ideals, endowed with the subspace topology. A cBCK-algebra $\mb{A}$ is \textit{directed} if each pair of elements has an upper bound. \begin{proposition}\label{small properties} Let $\mb{A}$ be a cBCK-algebra. \begin{enumerate} \item $\mrm{X}(\mb{A})$ is Hausdorff if and only if each $\sigma(a)$ is clopen. \item If each $\sigma(a)$ is clopen, then $\mrm{X}(\mb{A})$ is zero-dimensional, totally disconnected, and completely regular. \item A point $\{M\}\subseteq\mrm{X}(\mb{A})$ is closed if and only if $M$ is a maximal ideal. \item If every prime ideal of $\mb{A}$ is maximal, then $\mrm{X}(\mb{A})$ is Hausdorff. \item If $\mb{A}$ is directed, then $\mrm{M}(\mb{A})$ is Hausdorff. \item If $\mb{A}$ is directed, then $\mrm{X}(\mb{A})$ is Hausdorff if and only if $\mrm{X}(\mb{A})=\mrm{M}(\mb{A})$. \end{enumerate} \end{proposition} \begin{proof}(1) Assume each $\sigma(a)$ is clopen. Take $P,Q\in\mrm{X}(\mb{A})$ with $P\neq Q$. Without loss of generality, there is some $x\in P$ such that $x\notin Q$. Then $Q\in\sigma(x)$ while $P\in\sigma(x)^c$. The sets $\sigma(x)$ and $\sigma(x)^c$ are obviously disjoint and open, so $\mrm{X}(\mb{A})$ is Hausdorff. On the other hand, assume $\mrm{X}(\mb{A})$ is Hausdorff and take $a\in \mb{A}$. We know $\sigma(a)$ is compact open, but compact subsets of Hausdorff spaces are closed. So $\sigma(a)$ is clopen. (2) Assume each $\sigma(a)$ is clopen. Then $\mc{T}_0$ is a basis of clopen sets, which means $\mrm{X}(\mb{A})$ is zero-dimensional. From Corollary \ref{X is LC} and (1) above, we see that $\mrm{X}(\mb{A})$ is locally compact Hausdorff. Any locally compact Hausdorff space is completely regular.
Lastly, for locally compact Hausdorff spaces, being zero-dimensional is equivalent to being totally disconnected. (3) We know maximal ideals are prime, and we saw in Lemma \ref{X(A) is sober} that $\overline{\{P\}}=V(P)=\{Q\in\mrm{X}(\mb{A})\mid P\subseteq Q\}$ for any prime ideal $P$. Thus, if $M$ is a maximal ideal, we have $\overline{\{M\}}=\{M\}$ by maximality. On the other hand, suppose $\{P\}\subseteq \mrm{X}(\mb{A})$ is closed. We claim that $P$ is a maximal ideal. To see this, suppose there is a prime ideal $M$ with $P\subseteq M$ and let $C$ be a closed set containing $P$. Then $C=\sigma(J)^c$ for some ideal $J$, and so $P\in \sigma(J)^c$. Thus, $J\subseteq P\subseteq M$, meaning $M\in\sigma(J)^c=C$ as well. But this implies $M\in\bigcap \{C\subseteq\mrm{X}(\mb{A})\mid P\in C, \text{ $C$ is closed}\}=\{P\}$. Thus, $P=M$ and $P$ is maximal. (4) Assume $\mrm{X}(\mb{A})=\mrm{M}(\mb{A})$. By (3), then, it follows that every point is closed and hence $\mrm{X}(\mb{A})$ is Hausdorff. (5) Assume $\mb{A}$ is directed. Then $(x\boldsymbol{\cdot} y)\wedge(y\boldsymbol{\cdot} x)=0$ for all $x,y\in \mb{A}$ (see Lemma 5.2.2 and Theorem 5.2.28 of \cite{DP00}). Now take $M_1,M_2\in\mrm{M}(\mb{A})$ with $M_1\neq M_2$. By maximality we have $M_1\not\subseteq M_2$ and $M_2\not\subseteq M_1$. Pick $a\in M_1\setminus M_2$ and $b\in M_2\setminus M_1$. We know $a\boldsymbol{\cdot} b\leq a$ and $b\boldsymbol{\cdot} a\leq b$, and since ideals are down-sets this means $a\boldsymbol{\cdot} b\in M_1$ and $b\boldsymbol{\cdot} a\in M_2$. If $a\boldsymbol{\cdot} b\in M_2$, then since $b\in M_2$ we must have $a\in M_2$, a contradiction. So $a\boldsymbol{\cdot} b\in M_1\setminus M_2$, and similarly $b\boldsymbol{\cdot} a\in M_2\setminus M_1$. Thus, $M_1\in \sigma(b\boldsymbol{\cdot} a)$ and $M_2\in \sigma(a\boldsymbol{\cdot} b)$. But notice that \[\sigma(a\boldsymbol{\cdot} b)\cap \sigma(b\boldsymbol{\cdot} a)=\sigma\bigl((a\boldsymbol{\cdot} b)\wedge(b\boldsymbol{\cdot} a)\bigr)=\sigma(0)=\emptyset\,,\] so $\sigma(a\boldsymbol{\cdot} b)$ and $\sigma(b\boldsymbol{\cdot} a)$ are disjoint open sets separating $M_1$ and $M_2$. Hence, $\mrm{M}(\mb{A})$ is Hausdorff. (6) This follows from the fact that $(x\boldsymbol{\cdot} y)\wedge(y\boldsymbol{\cdot} x)=0$ for all $x,y\in \mb{A}$ together with Theorem 6.1.7 of \cite{DP00}. \end{proof} We close this section with a theorem that will allow us to more effectively compute spectra for certain algebras, as well as one example computation to illustrate the idea. \begin{theorem}\label{X(U) is coproduct} Let $\mb{U}=\bigcupdot_{\lambda\in\Lambda}\mb{A}_\lambda$, where $\Lambda$ is any indexing set. Then $\mrm{X}(\mb{U})$ is homeomorphic to $\bigsqcup_{\lambda\in\Lambda}\mrm{X}(\mb{A}_\lambda)$ with the disjoint union topology. That is, \[\mrm{X}\Bigl(\bigcupdot_{\lambda\in\Lambda}\mb{A}_\lambda\Bigr)\simeq\bigsqcup_{\lambda\in\Lambda}\mrm{X}(\mb{A}_\lambda)\,.\] \end{theorem} \begin{proof} For any $P\in\bigsqcup_{\lambda\in\Lambda}\mrm{X}(\mb{A}_\lambda)$, we know $P$ is a prime ideal of $\mb{A}_\mu$ for some $\mu$. Define \[\Phi\colon\bigsqcup_{\lambda\in\Lambda}\mrm{X}(\mb{A}_\lambda)\to\mrm{X}\Bigl(\bigcupdot_{\lambda\in\Lambda}\mb{A}_\lambda\Bigr)\] by $\Phi(P)=\bigcupdot_{\lambda\in\Lambda}\mb{A}_{\lambda,\mu}^P$. By Theorem \ref{primes_in_union} this map is surjective. If $\Phi(P)=\Phi(Q)$, then $\bigcupdot_{\lambda\in\Lambda}\mb{A}_{\lambda,\alpha}^P=\bigcupdot_{\lambda\in\Lambda}\mb{A}_{\lambda,\beta}^Q$, where $P\in\mrm{X}(\mb{A}_\alpha)$ and $Q\in\mrm{X}(\mb{A}_\beta)$.
This is only possible if $\alpha=\beta$ and $P=Q$. So $\Phi$ is a bijection. We now show that $\Phi$ is continuous. Let $\sigma_\mb{U}(a)$, for $a\in \mb{U}$, be a basic open set in $\mrm{X}(\mb{U})$. Since $a\in \mb{U}$, we have $a\in\mb{A}_\beta$ for some $\beta\in\Lambda$. We claim that $\Phi^{-1}\bigl(\sigma_\mb{U}(a)\bigr)=\sigma_{\mb{A}_\beta}(a)$. Take $P\in\Phi^{-1}\bigl(\sigma_\mb{U}(a)\bigr)$, so $\Phi(P)\in\sigma_\mb{U}(a)$. By definition of $\Phi$ we have $\Phi(P)=\bigcupdot_{\lambda\in\Lambda}\mb{A}_{\lambda,\mu}^P$ for some index $\mu$. Since $a\in\mb{A}_\beta$ but $a\notin \Phi(P)=\bigcupdot_{\lambda\in\Lambda}\mb{A}_{\lambda,\mu}^P$, it follows that $\mu=\beta$; that is, $P\in\mrm{X}(\mb{A}_\beta)$ and $\Phi(P)=\bigcupdot_{\lambda\in\Lambda}\mb{A}_{\lambda,\beta}^P$. So then $a\notin P$ which means $P\in\sigma_{\mb{A}_\beta}(a)$. For the other inclusion, take $P\in\sigma_{\mb{A}_\beta}(a)$. So $P\in\mrm{X}(\mb{A}_\beta)$ and $a\notin P$. Then $\Phi(P)=\bigcupdot_{\lambda\in\Lambda}\mb{A}_{\lambda,\beta}^P$ and $a\notin \Phi(P)$ as well, so $\Phi(P)\in\sigma_\mb{U}(a)$. Hence $P\in\Phi^{-1}\bigl(\sigma_\mb{U}(a)\bigr)$, and therefore $\Phi^{-1}\bigl(\sigma_\mb{U}(a)\bigr)=\sigma_{\mb{A}_\beta}(a)$ as claimed. Next we note that $\sigma_{\mb{A}_\beta}(a)$ is open in the disjoint union topology on $\bigsqcup_{\lambda\in\Lambda}\mrm{X}(\mb{A}_\lambda)$. To see this, notice \[\sigma_{\mb{A}_\beta}(a)\cap\mrm{X}(\mb{A}_\lambda)=\begin{cases}\emptyset & \text{if $\beta\neq\lambda$}\\ \sigma_{\mb{A}_\beta}(a) & \text{if $\beta=\lambda$,}\end{cases}\] meaning $\sigma_{\mb{A}_\beta}(a)\cap\mrm{X}(\mb{A}_\lambda)$ is open in $\mrm{X}(\mb{A}_\lambda)$ for all $\lambda$, and therefore $\sigma_{\mb{A}_\beta}(a)$ is open in the disjoint union topology. Thus, the preimage under $\Phi$ of any basic open set of $\mrm{X}(\mb{U})$ is open in $\bigsqcup_{\lambda\in\Lambda}\mrm{X}(\mb{A}_\lambda)$, meaning $\Phi$ is continuous. We show that $\Phi$ is an open map. Let $V\subseteq \bigsqcup_{\lambda\in\Lambda}\mrm{X}(\mb{A}_\lambda)$ be open. Then $V\cap\mrm{X}(\mb{A}_\lambda)$ is open in $\mrm{X}(\mb{A}_\lambda)$ for each $\lambda$. Thus, for each $\lambda$, we have $V\cap\mrm{X}(\mb{A}_\lambda)=\sigma_{\mb{A}_\lambda}(I_\lambda)$ for some $I_\lambda\in\id(\mb{A}_\lambda)$. Put $I=\bigcupdot_{\lambda\in\Lambda}I_\lambda$. We will prove that $\Phi(V)=\sigma_\mb{U}(I)$. Take $Q\in\Phi(V)$; so $Q$ is a prime ideal in $\bigcupdot_{\lambda\in\Lambda}\mb{A}_\lambda$. Thus, $Q=\bigcupdot_{\lambda\in\Lambda}\mb{A}_{\lambda,\mu}^P$ for some index $\mu$ and some $P\in\mrm{X}(\mb{A}_\mu)$. So $\Phi(P)=Q$ meaning $P\in V$, and so $P\in V\cap\mrm{X}(\mb{A}_\mu)=\sigma_{\mb{A}_\mu}(I_\mu)$. Thus, $I_\mu\not\subseteq P$ which implies that $I=\bigcupdot_{\lambda\in\Lambda}I_\lambda\not\subseteq \bigcupdot_{\lambda\in\Lambda}\mb{A}_{\lambda,\mu}^P=Q$. Hence $Q\in\sigma_\mb{U}(I)$. On the other hand, take $Q\in\sigma_\mb{U}(I)$. Then $I\not\subseteq Q$ and $Q=\bigcupdot_{\lambda\in\Lambda}\mb{A}_{\lambda,\mu}^P$ for some $\mu$ and some $P\in\mrm{X}(\mb{A}_\mu)$. It follows that $I_\mu\not\subseteq P$, and so $P\in\sigma_{\mb{A}_\mu}(I_\mu)=V\cap\mrm{X}(\mb{A}_\mu)$. In particular, $P\in V$ and $\Phi(P)=Q$, so $Q\in\Phi(V)$. Therefore, $\Phi(V)=\sigma_\mb{U}(I)$, which tells us $\Phi$ is an open map. Since $\Phi$ is an open continuous bijection, it is a homeomorphism. \end{proof} Let $\mb{A}$ be a simple cBCK-algebra.
Then $\mb{A}$ has exactly two ideals; the trivial ideal is automatically maximal, and therefore prime. Thus, for any simple cBCK-algebra the spectrum $\mrm{X}(\mb{A})$ is a one-point space, and so the spectra of any two simple cBCK-algebras are homeomorphic. For example, $\mrm{X}(\mb{C}_1)\simeq\mrm{X}(\mb{N}_0)$, despite the algebras $\mb{C}_1$ and $\mb{N}_0$ being different order types! \begin{example}\label{noncompact example} Consider $\mb{U}=\bigcupdot_{\lambda\in\Lambda}\mb{C}_1$. By Theorem \ref{X(U) is coproduct} we have $\mrm{X}(\mb{U})\simeq \bigsqcup_{\lambda\in\Lambda}\mrm{X}(\mb{C}_1)$ with the disjoint union topology. But $\mrm{X}(\mb{C}_1)$ is a one-point space, so $\mrm{X}(\mb{U})$ is a discrete space with cardinality $|\Lambda|$. Thus, $\mrm{X}(\mb{U})$ is not compact unless $\Lambda$ is finite, and a subset $V$ of $\mrm{X}(\mb{U})$ is compact if and only if $V$ is finite. Let us label the atoms of $\mb{U}$ by $\{a_\lambda\}_{\lambda\in \Lambda}$. Applying Theorem \ref{primes_in_union}, every prime ideal of $\mb{U}$ is of the form $P_\lambda=\mb{U}\setminus\{a_\lambda\}$ and the basis for our topology is \[\mc{T}_0=\bigl\{\,\sigma(0)\,\bigr\}\cup\bigl\{\,\sigma(a_\lambda)\,\bigr\}_{\lambda\in \Lambda}=\bigl\{\,\emptyset\,\bigr\}\cup\bigl\{\,\{P_\lambda\}\,\bigr\}_{\lambda\in \Lambda}\,.\] Hence, $\mc{T}(\mb{U})$ is lattice-isomorphic to $\mt{P}(\Lambda)$, the powerset of $\Lambda$, and $\K\mrm{X}(\mb{U})$ is lattice-isomorphic to $\mt{P}_{\text{fin}}(\Lambda)$, the lattice of finite subsets of $\Lambda$. In particular, $\K\mrm{X}\bigl(\bigcupdot_{i=1}^n\mb{C}_1\bigr)$ is lattice-isomorphic to $\bb{B}_n$, the finite Boolean algebra of order $2^n$. \end{example} \section{Functoriality of $\K$ and $\mrm{X}$} Let $X$ and $Y$ be generalized spectral spaces. A map $g\colon X\to Y$ is a \textit{spectral map} if the inverse image of every compact open subset of $Y$ is compact open in $X$. That is, $g^{-1}\bigl(\K(Y)\bigr)\subseteq \K(X)$. Since $\K(Y)$ forms a basis for the topology on $Y$, any spectral map is continuous. Let $\mt{GSpec}$ denote the category of generalized spectral spaces with spectral maps as morphisms. Similarly, let $\mt{Spec}$ denote the category of spectral spaces with spectral maps as morphisms. We have already seen that the spectrum of any cBCK-algebra is a generalized spectral space. That is, $\mrm{X}(\mb{A})\in \mt{GSpec}$ for any $\mb{A}\in\mt{cBCK}$. Suppose $f\colon \mb{A}\to \mb{B}$ is a BCK-homomorphism. For any prime ideal $Q$ in $\mb{B}$, the preimage $f^{-1}(Q)$ is a prime ideal in $\mb{A}$. So we define $\mrm{X}(f)\colon \mrm{X}(\mb{B})\to \mrm{X}(\mb{A})$ by $\mrm{X}(f)(Q)=f^{-1}(Q)$ for $Q\in \mrm{X}(\mb{B})$. It is straightforward to check that $\mrm{X}:\mt{cBCK}\to \mt{GSpec}$ is a contravariant functor; for a proof see Proposition 4.1.1 of \cite{evans20}. This functor cannot be fully faithful since fully faithful functors are injective on objects. We saw earlier that $\mrm{X}(\mb{C}_1)\simeq \mrm{X}(\mb{N}_0)$ -- they are both one-point spaces -- but certainly $\mb{C}_1\not\cong \mb{N}_0$. This has the further implication that $\mrm{X}$ does not yield a dual equivalence of categories. In this way, our situation is similar to that of commutative rings. The functor $\text{Spec}\colon\mt{CommRing}\to\mt{Spec}$ which associates a commutative ring to its prime spectrum is also not a dual equivalence.
We contrast this with the well-known dual equivalences between Boolean algebras and Stone spaces, $\mt{BA}\cong^\partial\mt{Stone}$, or between bounded distributive lattices and Priestley spaces, $\mt{BDL}\cong^\partial \mt{Pries}$ (see \cite{stone36} and \cite{pries70}). It is also well-known that the category of Priestley spaces is equivalent to (in fact, isomorphic to) the category of spectral spaces (see \cite{cornish75}). So by the preceding paragraph we have $\mt{BDL}\cong^\partial\mt{Pries}\cong\mt{Spec}$. Hence, there is a dual equivalence $\mt{BDL}\cong^\partial\mt{Spec}$ between bounded distributive lattices and spectral spaces. This duality extends to a duality between the category $\mt{DL}_0$ of distributive lattices with 0, where the morphisms are 0-preserving lattice homomorphisms with cofinal range, and the category $\mt{GSpec}$ of generalized spectral spaces (see \cite{stone38}). This duality sends a distributive lattice $\mb{D}$ to its prime spectrum $\text{Spec}(\mb{D})$ endowed with the Zariski topology in one direction, and it sends a generalized spectral space $Y$ to the lattice $\K(Y)$ of compact open subsets in the other. Our situation is diagrammed below. \begin{center} \begin{tikzcd} \mt{cBCK} \arrow[r, "\mrm{X}"] & \mt{GSpec} \arrow[r, "\K", bend left] & \mt{DL}_0 \arrow[l, "\text{Spec}", bend left] \end{tikzcd} \end{center} It would be very nice to have an explicit characterization of the image of $\mrm{X}$ in $\mt{GSpec}$, but this is a difficult problem. On the other hand, the dual equivalence between $\mt{DL}_0$ and $\mt{GSpec}$ (as well as $\mt{BDL}$ and $\mt{Spec}$) has been studied and may be a fruitful way of gaining leverage on the situation. In particular, it would be interesting to know what distributive lattices lie in the image of the composite functor $\K\mrm{X}$. We give partial results in Theorem \ref{culmination}, Corollary \ref{KX yields products}, and Corollary \ref{specific products}. For a topological space $X$ we use the notation $\mc{T}_X$ for the lattice of open sets. It is known (see \cite{CGL99}, Proposition 1.2) that two generalized spectral spaces $X$ and $Y$ are homeomorphic if and only if the lattices $\mc{T}_X$ and $\mc{T}_Y$ are isomorphic. \begin{corollary}\label{homeo criterion} Let $\mb{A}$ be a cBCK-algebra and $Y$ a generalized spectral space. Then $Y\simeq \mrm{X}(\mb{A})$ if and only if $\mc{T}_Y\cong\id(\mb{A})$ as lattices, if and only if $\K(Y)\cong \K\mrm{X}(\mb{A})$. \end{corollary} \begin{proof} This follows from the preceding paragraph together with Remark \ref{sigma is isom} which tells us $\mc{T}_{\mrm{X}(\mb{A})}=\mc{T}(\mb{A})\cong\id(\mb{A})$ as lattices. The second equivalence follows because the compact open sets form a basis for the topology. \end{proof} While this does give a small inroad for understanding the image of $\mrm{X}$ in $\mt{GSpec}$, it represents more of a change of perspective than a reduction in difficulty. For now we will focus our attention on a particular class of topological spaces. \subsection{Noetherian spaces} \begin{definition} A topological space $Y$ is \textit{Noetherian} if it satisfies the descending chain condition on closed subsets: for any sequence $C_1\supseteq C_2\supseteq\cdots$ of closed subsets of $Y$, there is some $n\in\bb{N}$ such that $C_{n+k}=C_{n}$ for all $k>0$. \end{definition} This is equivalent to saying that $Y$ satisfies the ascending chain condition on open subsets.
One can show (see \cite{hartshorne77}, Exercise 2.13) that a space $Y$ is Noetherian if and only if every open set is compact. We also note that any finite topological space is obviously Noetherian. \begin{proposition}\label{these spectra are Noetherian} If $T$ is a finite rooted tree, the space $\mrm{X}(\mb{A}^T)$ is Noetherian. For any $n\in\bb{N}$, the space $\mrm{X}\bigl(\bigcupdot_{i=1}^n\mb{C}_1\bigr)$ is Noetherian. \end{proposition} \begin{proof} These spaces are finite.\end{proof} \begin{theorem}\label{infinite chain noetherian} The space $\mrm{X}(\mb{A}^{\ch_\infty})$ is Noetherian. \end{theorem} \begin{proof} Let $V$ be an open set and let $\mc{U}=\{U_j\}_{j\in J}$ be an open cover of $V$. Since $\bb{P}(\ch_\infty)$ is linearly ordered, every open set of $\mrm{X}(\mb{A}^{\ch_\infty})$ has the form $\sigma\bigl(I(p')\bigr)$ for some root-based path $p'$. Thus there is a root-based path $p$ such that $V=\sigma\bigl(I(p)\bigr)$, and for each $j$ there is a root-based path $q_j$ such that $U_j=\sigma\bigl(I(q_j)\bigr)$. Let $q$ be the shortest-length path among the $q_j$'s. Then $q\subseteq q_j$ for all $j\in J$, which means $I(q_j)\subseteq I(q)$ for all $j\in J$. Hence, $V\subseteq \bigcup_{j\in J}U_j\subseteq \sigma\bigl(I(q)\bigr)$, and since $\sigma\bigl(I(q)\bigr)$ is itself one of the sets in $\mc{U}$, this single set is a finite subcover of $V$; therefore $V$ is compact. Since every open set is compact, $\mrm{X}(\mb{A}^{\ch_\infty})$ is Noetherian. \end{proof} The usefulness of an algebra having a Noetherian spectrum is the following: \begin{lemma}\label{KX(A) = id(A)} If $\mb{A}\in\mt{cBCK}$ is such that $\mrm{X}(\mb{A})$ is Noetherian, then $\K\mrm{X}(\mb{A})\cong \id(\mb{A})$. \end{lemma} \begin{proof} Suppose $\mrm{X}(\mb{A})$ is Noetherian. Then every open set is compact; that is, $\K\mrm{X}(\mb{A})=\mc{T}(\mb{A})$, but then $\K\mrm{X}(\mb{A})=\mc{T}(\mb{A})\cong \id(\mb{A})$ by Remark \ref{sigma is isom}. \end{proof} Thus, under the right circumstances, we can find lattices in the image of $\K\mrm{X}$ by finding lattices that occur as the lattice of ideals of a cBCK-algebra. Of course the assumption that $\mrm{X}(\mb{A})$ be Noetherian is rather strong, and not all spectra are Noetherian. For example, we saw that $\mrm{X}\bigl(\bigcupdot_{\lambda\in \Lambda}\mb{C}_1\bigr)$ is discrete with cardinality $|\Lambda|$, so if $\Lambda$ is infinite this spectrum is not compact and cannot be Noetherian. Nevertheless, the above lemma is still a useful tool. \begin{theorem}\label{culmination} The following all lie in the image of $\K\mrm{X}$: \begin{enumerate} \item every distributive lattice $\mb{D}$ such that $\text{MI}(\mb{D})\cong T^\partial$, as posets, for some finite rooted tree $T$, \item every finite chain, \item any countably infinite chain isomorphic to $(\bb{N}_0^\infty)^\partial$, \item the underlying lattice of every finite subdirectly irreducible distributive p-algebra, \item the underlying lattice of every finite Boolean algebra. \end{enumerate} \end{theorem} \begin{proof}\hfill (1) This proof follows the same strategy as the proof of Theorem \ref{ideals of A^T_n form a p-alg}. Let $\mb{D}$ be such a distributive lattice. By Birkhoff's theorem, this lattice is unique up to isomorphism. Since $T$ is finite, so too is $\mrm{X}(\mb{A}^T)$, and we already know that $\mrm{X}(\mb{A}^T)\cong T^\partial$ as posets by Corollary \ref{X(A^T) = T^d} so we have $\text{MI}(\mb{D})\cong T^\partial\cong \mrm{X}(\mb{A}^T)$. But we know that $\mrm{X}(\mb{A}^T)=\text{MI}\bigl(\id(\mb{A}^T)\bigr)$, and we know $\id(\mb{A}^T)$ is a distributive lattice.
Hence, $\mb{D}\cong \id(\mb{A}^T)$ as lattices by the uniqueness of $\mb{D}$. Further, because $\mrm{X}(\mb{A}^T)$ is finite it is Noetherian as a topological space. Applying Lemma \ref{KX(A) = id(A)} gives $\mb{D}\cong \id(\mb{A}^T)\cong \K\mrm{X}(\mb{A}^T)$. (2) This is a special case of (1) with $\mb{D}\cong \mb{n}$, the $n$-element chain. In this case, $\text{MI}(\mb{n})$ is a chain with $n-1$ elements, which we view as a rooted tree. We therefore have $\mb{n}$ in the image of $\K\mrm{X}$. This could also be seen using Example \ref{chain of length n} together with the fact that $\mrm{X}(\mb{A}^{\ch_{n-1}})$ is Noetherian. (3) Theorem \ref{infinite chain noetherian} shows $\mrm{X}(\mb{A}^{\ch_\infty})$ is Noetherian, and therefore $\K\mrm{X}(\mb{A}^{\ch_\infty})\cong\id(\mb{A}^{\ch_\infty})\cong (\bb{N}_0^\infty)^\partial$, where the last isomorphism was shown in Example \ref{countable chain}. (4) Combine Theorem \ref{ideals of A^T_n form a p-alg}, Proposition \ref{these spectra are Noetherian}, and Lemma \ref{KX(A) = id(A)}. (5) Combine Proposition \ref{these spectra are Noetherian} and Lemma \ref{KX(A) = id(A)}. \end{proof} \begin{remark} We note that the process used in Theorem \ref{culmination}(1) will not yield every finite distributive lattice. Consider the free bounded distributive lattice on two generators $\mb{F}_2$ shown in Figure \ref{fig:F2}, where $\text{MI}(\mb{F}_2)$ is indicated in red. Since the bottom element $0$ has two distinct upper covers in $\text{MI}(\mb{F}_2)$, this poset is not of the form $T^\partial$ for any rooted tree $T$. Therefore $\mb{F}_2$ cannot be obtained as $\id(\mb{A}^T)$ for any finite tree $T$. \begin{figure}[h] \centering \begin{tikzpicture} \filldraw[red] (0,0) circle (2pt); \filldraw (0,1) circle (2pt); \filldraw[red] (-1,2) circle (2pt); \filldraw[red] (1,2) circle (2pt); \filldraw[red] (0,3) circle (2pt); \filldraw (0,4) circle (2pt); \draw [-] (0,0) -- (0,1); \draw [-] (0,1) -- (1,2); \draw [-] (0,1) -- (-1,2); \draw [-,red] (1,2) -- (0,3); \draw [-,red] (-1,2) -- (0,3); \draw [-] (0,3) -- (0,4); \node at (0,-.3) {\small $0$}; \node at (.6, .9) {\small $x\wedge y$}; \node at (-1.3, 2) {\small $x$}; \node at (1.3, 2) {\small $y$}; \node at (.6, 3.1) {\small $x\vee y$}; \node at (0, 4.3) {\small $1$}; \end{tikzpicture} \caption{The lattice $\mb{F}_2$, with $\text{MI}(\mb{F}_2)$ indicated in red}\label{fig:F2} \end{figure} \end{remark} \subsection{Disjoint union in $\mt{GSpec}$}\label{disjoint union in gspec} Recall that the functor $\K\colon \mt{GSpec}\to\mt{DL}_0$ provides a dual equivalence. From this it follows that $\K$ sends coproducts to products and vice versa. Given a family of generalized spectral spaces $\{X_\lambda\}_{\lambda\in\Lambda}$, the disjoint union $\bigsqcup_{\lambda\in\Lambda}X_\lambda$ with the disjoint union topology is the coproduct in the category $\mt{Top}$ of all topological spaces. Unfortunately it may not be the coproduct in $\mt{GSpec}$, as the next example shows. \begin{example} Suppose $\Lambda$ is an infinite indexing set, each $X_\lambda=\{\ast\}$, the one-point space, and put $Z=\{\ast\}$ as well. The one-point space is a spectral space while the disjoint union $\mc{X}:=\bigsqcup_{\lambda\in\Lambda}X_\lambda$ is a generalized spectral space; see Theorem \ref{disjoint union}. For each $\lambda$ we have a unique spectral map $f_\lambda: X_\lambda\to Z$, and the inclusion maps $\text{incl}_\lambda: X_\lambda\to\mc{X}$ are spectral maps as well.
Now consider the following diagram: \begin{center} \begin{tikzcd} & & Z \\ & & \\ X_\lambda \arrow[rr, "\text{incl}_\lambda"'] \arrow[rruu, "f_\lambda"] & & \mc{X} \arrow[uu, "{\exists !\, f\, ?}"', dotted] \end{tikzcd} \end{center} There is exactly one set map $f$ which makes this diagram commute for all $\lambda$, namely $f(x)=\ast$ for all $x\in\mc{X}$. But this map is not a spectral map since $f^{-1}(\{\ast\})=\mc{X}$, which is not compact since $\Lambda$ is infinite. That is, $f$ is not a morphism in $\mt{GSpec}$. In fact, $\hom_{\mt{GSpec}}(\mc{X}, Z)$ is empty! Hence, $\mc{X}$ is not the coproduct of the $X_\lambda$'s in $\mt{GSpec}$. \end{example} However, we will see that for \textit{finite} families in $\mt{GSpec}$, the disjoint union is indeed the coproduct. To do this, we first need to know that the disjoint union of generalized spectral spaces is again generalized spectral. We break the proof into some smaller lemmas. Let $\{X_\lambda\}_{\lambda\in\Lambda}$ be a family of generalized spectral spaces and put $\mc{X}=\bigsqcup_{\lambda\in\Lambda}X_\lambda$, endowed with the disjoint union topology. Open sets in $\mc{X}$ are of the form $\bigsqcup_{\lambda\in\Lambda}U_\lambda$, where $U_\lambda$ is open in $X_\lambda$. We collect here several useful observations; we refer the reader to the text \cite{DST2019} by Dickmann, Schwartz, and Tressl. \begin{enumerate} \item For each $\lambda$, any open set of $X_\lambda$ is open in $\mc{X}$. \item A subset $C\subseteq\mc{X}$ is closed in $\mc{X}$ if and only if $C=\bigsqcup_{\lambda\in\Lambda} C_\lambda$, where each $C_\lambda$ is closed in $X_\lambda$. Consequently, for each $\lambda$, any closed subset of $X_\lambda$ is closed in $\mc{X}$. \item A non-empty subset $C\subseteq\mc{X}$ is irreducible in $\mc{X}$ if and only if there is some index $\lambda\in\Lambda$ such that $C\subseteq X_\lambda$ and $C$ is irreducible in $X_\lambda$. \item The compact open subsets of $\mc{X}$ are of the form $\bigsqcup_{\lambda\in F} V_{\lambda}$, where $V_{\lambda}\in\K(X_{\lambda})$ and $F$ is a finite subset of $\Lambda$. Consequently, for each $\lambda$, any compact open subset of $X_\lambda$ is also compact open in $\mc{X}$. \item A disjoint union of $T_0$ spaces is $T_0$. \end{enumerate} \begin{lemma}\label{basis} The compact open subsets of $\mc{X}$ are a multiplicative basis. \end{lemma} \begin{proof} Let $U=\bigsqcup_{\lambda\in\Lambda}U_\lambda$ be open in $\mc{X}$. Since $\K(X_\lambda)$ is a basis for $X_\lambda$, each $U_\lambda$ can be written as a union of elements in $\K(X_\lambda)$, which are elements of $\K(\mc{X})$ by observation (1). Thus we can write $U$ as a union of elements in $\K(\mc{X})$, meaning $\K(\mc{X})$ is a basis for the disjoint union topology. Now take $U,V\in\K(\mc{X})$. By observation (4) we can write $U=\bigsqcup_{\lambda\in F}U_\lambda$ and $V=\bigsqcup_{\mu\in G}V_\mu$ where $F$ and $G$ are finite subsets of $\Lambda$, each $U_\lambda$ is compact open in $X_\lambda$, and each $V_\mu$ is compact open in $X_\mu$. If $\lambda\neq \mu$, then $U_\lambda\cap V_\mu=\emptyset$, and so \begin{align*} U\cap V =\Bigl(\bigsqcup_{\lambda\in F}U_\lambda\Bigr)\cap\Bigl(\bigsqcup_{\mu\in G}V_\mu\Bigr) &=\bigsqcup_{\lambda\in F}\bigsqcup_{\mu\in G}(U_\lambda\cap V_\mu)\\ &=\bigsqcup_{\alpha\in F\cap G}(U_\alpha\cap V_\alpha)\,. \end{align*} For each $\alpha\in F\cap G$ we know $U_\alpha\cap V_\alpha\in \K(X_\alpha)$ since $\K(X_\alpha)$ is a multiplicative basis, which further means $U_\alpha\cap V_\alpha\in\K(\mc{X})$.
Lastly, we note that $F\cap G$ is a finite subset of $\Lambda$, and we have $U\cap V\in \K(\mc{X})$. \end{proof} \begin{lemma}\label{union is sober} The space $\mc{X}$ is quasi-sober. \end{lemma} \begin{proof} This follows from observation (3), the quasi-sobriety of each $X_\lambda$, and the fact that each $X_\lambda$ is closed in $\mc{X}$ (observation (2)). \end{proof} \begin{theorem}\label{disjoint union} The disjoint union $\mc{X}=\bigsqcup_{\lambda\in\Lambda}X_\lambda$ of a family of generalized spectral spaces with the disjoint union topology is also a generalized spectral space. \end{theorem} \begin{proof} Combine observation (5), Lemma \ref{basis}, and Lemma \ref{union is sober}. \end{proof} \begin{theorem}\label{finite coproduct in gspec} Let $\{X_j\}_{j=1}^n$ be a finite family of generalized spectral spaces. The coproduct of this family in $\mt{GSpec}$ is the disjoint union $\bigsqcup_{j=1}^nX_j$. \end{theorem} \begin{proof} Suppose we have a generalized spectral space $Z$ equipped with spectral maps $f_j:X_j\to Z$ for $j=1,\ldots, n$. Consider the diagram \begin{center} \begin{tikzcd} & & Z \\ & & \\ X_j \arrow[rr, "\text{incl}_j"'] \arrow[rruu, "f_j"] & & \bigsqcup_{j=1}^nX_j \arrow[uu, "{\exists !\, f\, ?}"', dotted] \end{tikzcd} \end{center} For $x\in \bigsqcup_{j=1}^nX_j$, we must have $x\in X_j$ for some index $j$, and so we define $f(x):=f_j(x)$. This is the unique map making the above diagram commute. We show $f$ is a spectral map. Let $V$ be a compact open subset of $Z$. A computation gives $f^{-1}(V)=\bigsqcup_{j=1}^n f_j^{-1}(V)$. Since each $f_j$ is a spectral map, each preimage $f_j^{-1}(V)$ is compact open in $X_j$. Thus, applying observation (4) we see that $f^{-1}(V)$ is compact open in $\bigsqcup_{j=1}^nX_j$. So $f$ is a spectral map. \end{proof} \subsection{$\K\mrm{X}$ obtains certain products} Combining results from previous sections, we can now state the following. \begin{lemma}\label{products} For a finite family $\{\mb{A}_j\}_{j=1}^n$ of cBCK-algebras we have lattice-isomorphisms \[\K\mrm{X}\Bigl(\bigcupdot_{j=1}^n\mb{A}_j\Bigr)\cong \K\Bigl(\bigsqcup_{j=1}^n\mrm{X}(\mb{A}_j)\Bigr)\cong \prod_{j=1}^n \K\mrm{X}(\mb{A}_j)\,.\] \end{lemma} \begin{proof} The first isomorphism follows from Theorem \ref{X(U) is coproduct}. The second follows from the fact that $\K$ sends coproducts in $\mt{GSpec}$ to products in $\mt{DL}_0$, together with Theorem \ref{finite coproduct in gspec}. \end{proof} \begin{corollary}\label{KX yields products} Let $\{\mb{D}_j\}_{j=1}^n$ be a finite family of distributive lattices such that, for each $j\in\{1,\ldots, n\}$, we have $\mb{D}_j\cong \K\mrm{X}(\mb{A}_j)$ for some cBCK-algebra $\mb{A}_j$. Then the lattice $\P=\prod_{j=1}^n\mb{D}_j$ is in the image of $\K\mrm{X}$. \end{corollary} \begin{proof} Let $\mb{U}=\bigcupdot_{j=1}^n\mb{A}_j$. Then by Lemma \ref{products} we have \[\K\mrm{X}(\mb{U})\cong \prod_{j=1}^n\K\mrm{X}(\mb{A}_j)\cong \prod_{j=1}^n \mb{D}_j\cong \P\,.\] \end{proof} \begin{corollary}\label{specific products} Let $\{\mb{D}_j\}_{j=1}^n$ be a collection of distributive lattices such that each $\mb{D}_j$ is one of the five types from Theorem \ref{culmination}, and let $\P=\prod_{j=1}^n\mb{D}_j$. Then $\P$ is in the image of $\K\mrm{X}$. \end{corollary} For example, pick $n\in\bb{N}$ and let $\mb{D}(n)$ be the divisor lattice of $n$. Since $\mb{D}(n)$ is a finite product of finite chains (one chain for each prime power in the factorization of $n$), it is in the image of $\K\mrm{X}$.
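As a sanity check on this last example, the following Python sketch (our own illustration, not part of the development above) verifies that the divisor lattice $\mb{D}(12)$ is isomorphic to the product of chains determined by the factorization $12=2^2\cdot 3$, with meet and join given by $\gcd$ and $\mathrm{lcm}$ corresponding to coordinatewise $\min$ and $\max$ on exponent vectors.
\begin{verbatim}
from math import gcd
from itertools import product

n = 12                                   # 12 = 2^2 * 3^1
divs = [d for d in range(1, n + 1) if n % d == 0]

def exps(d):
    """Exponent vector (a, b) with d = 2^a * 3^b."""
    a = b = 0
    while d % 2 == 0: d //= 2; a += 1
    while d % 3 == 0: d //= 3; b += 1
    return (a, b)

vec = {d: exps(d) for d in divs}

for a, b in product(divs, repeat=2):
    assert vec[gcd(a, b)] == tuple(map(min, zip(vec[a], vec[b])))
    lcm = a * b // gcd(a, b)
    assert vec[lcm] == tuple(map(max, zip(vec[a], vec[b])))
print("D(12) is the product of a 3-chain and a 2-chain")
\end{verbatim}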
\section{Introduction} \textit{Federated Learning (FL)} is proposed as a paradigm that enables distributed clients to collaboratively train a shared model while preserving data privacy \cite{mcmahan2017communication}. Specifically, in each round of federated learning, clients obtain the global model and update it on their own private data to generate the local models, and then the central server aggregates these local models into a new global model. Most of the existing works focus on supervised federated learning in which clients train their local models with supervision. However, the data generated on edge devices are typically unlabeled. Therefore, learning a common representation model for various downstream tasks from decentralized and unlabeled data while keeping private data on devices, i.e. \textit{Federated Unsupervised Representation Learning (FURL)}, remains an open problem. \begin{figure}[!t] \centering \subfigure[Inconsistency of representation spaces.]{ \begin{minipage}{7cm} \centering \includegraphics[scale=0.21]{fig-1-a.pdf} \end{minipage}} \subfigure[Misalignment of representations.]{ \begin{minipage}{7cm} \centering \includegraphics[scale=0.21]{fig-1-b.pdf} \end{minipage}} \caption{Illustration of challenges in \textit{FURL}: (a) \textit{inconsistency of representation spaces}: data distribution shift among clients causes local models to focus on different categories; and (b) \textit{misalignment of representations}: without unified information, the representations across clients would be misaligned (e.g., rotated by a certain angle). The hyperspheres are representation spaces encoded by different local models in federated learning.} \label{fig-1} \end{figure} A natural idea is to combine federated learning with unsupervised approaches, i.e., to let clients train their local models via unsupervised methods. There are a lot of highly successful works on unsupervised representation learning. In particular, contrastive learning methods, which train models by reducing the distance between representations of positive pairs (e.g., different augmented views of the same image) and increasing the distance between representations of negative pairs (e.g., augmented views from different images), have been outstandingly successful in practice \cite{chen2020simple, oord2018representation, he2020momentum, chen2020improved}. However, their success relies heavily on abundant training data; for example, contrastive learning methods need a large number of negative samples for training \cite{sohn2016improved, chen2020simple}. Moreover, few of these unsupervised methods take data distribution shift into account, which is a common practical problem in federated learning. Hence, it is no easy task to combine federated learning with unsupervised approaches for the problem of \textit{FURL}. In federated learning applications, however, the collected data of each client is limited and the data distributions of clients might differ from one another \cite{zhao2018federated, sattler2019robust, jeong2018communication, yang2019federated, kairouz2019advances}. Hence, we face the following challenges in combining federated learning with unsupervised approaches for \textit{FURL}: \begin{itemize} \item \textbf{Inconsistency of representation spaces.} In federated learning, the limited data of each client leads to variation in data distribution from client to client, resulting in inconsistent representation spaces encoded by different local models.
For example, as shown in Figure \ref{fig-1}(a), client 1 has only images of cats and dogs, and client 2 has only images of cars and planes. Then, the locally trained model on client 1 only encodes a feature space of cats and dogs, failing to map cars or planes to appropriate representations, and the same goes for the model trained on client 2. Intuitively, the performance of the global model aggregated from these inconsistent local models may fall short of expectations. \item \textbf{Misalignment of representations.} Even if the training data of clients are IID and the representation spaces encoded by different local models are consistent, there may be misalignment between representations because of randomness in the training process. For instance, for a given input set, the representations generated by one model may be equivalent to the representations generated by another model only after rotation by a certain angle, as shown in Figure \ref{fig-1}(b). It should be noted that the misalignment between local models may have drastic detrimental effects on the performance of the aggregated model. \end{itemize} To address these challenges, we propose a contrastive loss-based federated unsupervised representation learning algorithm called FedCA, which consists of two main novel modules: a dictionary module for addressing the inconsistency of representation spaces and an alignment module for aligning the representations across clients. Specifically, the dictionary module, which is maintained by the server, aggregates abundant representations of samples from clients and can be shared with each client for local model optimization. In the alignment module, we first train a base model on a small public dataset (e.g., a subset of the STL-10 dataset) \cite{coates2011analysis}, and then require all local models to mimic the base model so that the representations generated by different local models can be aligned. Overall, in each round, FedCA involves two stages: (i) \textit{clients} train local representation models on their own unlabeled data via contrastive learning with the two modules above, and then generate local dictionaries, and (ii) the \textit{server} aggregates the trained local models to obtain a shared global model and integrates local dictionaries into a global dictionary. To the best of our knowledge, FedCA is the first algorithm designed for the \textit{FURL} problem. Our experiments show that FedCA outperforms naive methods that simply combine federated learning with unsupervised approaches. We believe that FedCA will serve as a critical foundation in this novel and challenging problem. \begin{figure*}[!t] \centering \subfigure[Overview of FedCA.]{ \begin{minipage}[t]{0.27\linewidth} \centering \includegraphics[width=1.9in]{fig-2-a.pdf} \end{minipage}} \subfigure[Local Update of Model.]{ \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=2.7in]{fig-2-b.pdf} \end{minipage}} \subfigure[Local Update of Dictionary.]{ \begin{minipage}[t]{0.23\linewidth} \centering \includegraphics[width=1.7in]{fig-2-c.pdf} \end{minipage}} \caption{Illustrations of FedCA. (a) In each round, clients generate local models and dictionaries, and then the server gathers them to obtain the global model and dictionary. (b) Clients update local models by contrastive learning with the dictionary and alignment modules. $x_{other}$ is a sample different from sample $x$, and $x_{alignment}$ is a sample from the additional public dataset for alignment. $f$ is the encoder and $g$ is the projection head.
(c) Clients generate local dictionaries via temporal ensembling.} \label{fig-2} \end{figure*} ~\\ \section{Related Work} \subsection{Federated Learning} Federated learning enables distributed clients to train a shared model collaboratively while keeping private data on devices \cite{mcmahan2017communication}. Li et al. add a proximal term to the loss function to keep local models close to the global model \cite{li2018federated}. Wang et al. propose a layer-wise federated learning algorithm to deal with the permutation invariance of neural network parameters \cite{wang2020federated}. However, existing works only focus on the consistency of parameters, while we emphasize the consistency of representations in this paper. Some works also focus on reducing the communication cost of federated learning \cite{konevcny2016federated}. To further protect the data privacy of clients, cryptographic technologies have been applied to federated learning \cite{bonawitz2017practical}. \subsection{Unsupervised Representation Learning} There are two main types of unsupervised learning methods: generative and discriminative. Generative approaches learn representations by generating pixels in the input space \cite{hinton2006reducing, kingma2013auto, radford2015unsupervised}. Discriminative approaches train representation models by performing pretext tasks where the labels are generated for free from unlabeled data \cite{pathak2017curiosity, gidaris2018unsupervised}. Among them, contrastive learning methods achieve excellent performance \cite{chen2020simple, oord2018representation, he2020momentum, chen2020improved}. The contrastive loss was proposed by Hadsell et al. \cite{hadsell2006dimensionality}. Wu et al. propose an unsupervised contrastive learning approach based on a memory bank to learn visual representations \cite{wu2018unsupervised}. Recently, Wang et al. pointed out two key properties related to the contrastive loss, closeness and uniformity \cite{wang2020understanding}. Other works also apply contrastive learning to video \cite{sermanet2018time, tian2019contrastive}, NLP \cite{mikolov2013distributed, logeswaran2018efficient, yang2019xlnet}, audio \cite{baevski2020wav2vec}, and graphs \cite{hassani2020contrastive, qiu2020gcc}. \subsection{Federated Unsupervised Learning} Some concurrent works \cite{Jin2020TowardsUU, van2020towards} also focus on federated learning from unlabeled data. Different from these works, which all simply combine federated learning with unsupervised approaches, we explore and identify the main challenges in federated unsupervised representation learning and design an algorithm to deal with these challenges. ~\\ \section{Preliminary} In this section, we discuss the primitives needed for our approach. \subsection{Federated Learning} In federated learning, each client $u \in U$ has a private dataset $D_u$ of training samples with $D = \cup_{u \in U} D_u$, and our aim is to train a shared model while keeping private data on devices. Many algorithms have been designed for aggregation in federated learning \cite{wang2020federated, li2018federated}. Here, for simplicity, we introduce a standard and popular aggregation method named FedAvg \cite{mcmahan2017communication}.
In each round of FedAvg, the server randomly selects a subset of clients $U^t \subseteq U$, and each client $u \in U^t$ locally updates the model $f$ with parameters $\theta^t$ on dataset $D_u$ via the stochastic gradient descent rule: \begin{equation} \theta^{t+1}_u \gets \theta^t - \eta\nabla\mathcal{L}_f(D_u, \theta^t) \end{equation} where $\eta$ is the stepsize. Then the server gathers the parameters of the local models $\{\theta^{t+1}_u | u \in U^t\}$ and aggregates them via a weighted average to generate a new global model: \begin{equation} \theta^{t+1} \gets \sum_{u \in U^t} \frac{|D_u|}{\sum_{i \in U^t}|D_i| } \theta^{t+1}_u \end{equation} The training process above is repeated until the global model converges. \subsection{Unsupervised Contrastive Learning} Unsupervised contrastive representation learning methods learn representations from unlabeled data by reducing the distance between representations of positive samples and increasing the distance between representations of negative samples. Among them, SimCLR achieves outstanding performance and can be applied to federated learning easily \cite{chen2020simple}. SimCLR randomly samples a minibatch of $N$ samples and applies two random data augmentations to each sample to obtain $2N$ views. Typically, the views augmented from the same image are treated as positive samples and the views augmented from different images are treated as negative samples \cite{dosovitskiy2014discriminative}. The loss function for a positive pair of samples $(i, j)$ is defined as: \begin{equation} l_{i, j} = -\log\frac{\exp(\mathrm{sim}(z_i, z_j)/\tau)}{\sum_{k=1}^{2N}\mathbbm{1}_{[k \neq i]}\exp(\mathrm{sim}(z_i, z_k)/\tau)}, \end{equation} where $\tau$ is the temperature and $\mathbbm{1}_{[k \neq i]} = 1$ iff $k \neq i$. The model (consisting of a base encoder network $f$ that extracts representation vectors $h$ from augmented views and a projection head $g$ that maps representations $h$ to $z$) is trained by minimizing the loss function above. Finally, we use the representations $h$ to perform downstream tasks.
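For concreteness, a minimal illustrative PyTorch sketch of this loss (Eq. (3)) is given below, assuming cosine similarity and a $2N \times d$ projection matrix whose rows $2i$ and $2i{+}1$ are the two views of sample $i$; the function name and layout are our assumptions for exposition, not SimCLR's reference implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def nt_xent_loss(z, tau=0.5):
    """Contrastive loss of Eq. (3) over 2N projections; rows 2i and 2i+1
    are assumed to be the two augmented views of sample i."""
    z = F.normalize(z, dim=1)              # cosine similarity via dot products
    sim = z @ z.t() / tau                  # (2N, 2N) scaled similarities
    sim.fill_diagonal_(float('-inf'))      # drop the k == i terms
    pos = torch.arange(z.size(0), device=z.device) ^ 1  # positive partner index
    return F.cross_entropy(sim, pos)
\end{verbatim}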
~\\ \section{Method} In this section, we analyze the two challenges mentioned above and detail the dictionary module and alignment module designed for these challenges. Then we introduce the Federated Contrastive Averaging with dictionary and alignment (FedCA) algorithm for \textit{FURL}. \subsection{Dictionary Module for Inconsistency Challenge} \textit{FURL} aims to learn a shared model mapping data to representation vectors such that similar samples are mapped to nearby points in representation space, so that the features are well-clustered by classes. However, the presence of Non-IID data presents a great challenge to \textit{FURL}. Since the local dataset $D_u$ of a given client $u$ likely contains samples of only a few classes, the local models may encode inconsistent spaces, degrading the performance of the aggregated model. To empirically verify this, we visualize the representations of images from CIFAR-10 via the T-SNE method. To be specific, we split the training data of CIFAR-10 into 5 Non-IID sets, each consisting of 10000 samples from 2 classes. Then the FedAvg algorithm is directly combined with an unsupervised approach (SimCLR) to learn representations from these subsets. We use the local model at the 20th round of the client who only has samples of classes 0 and 1 to extract features from the test set of CIFAR-10 and visualize the representations after dimensionality reduction by T-SNE, as shown in Figure \ref{fig-3}(a). We find that the scattered representations of samples from classes 0 and 1 spread over a very large area of the representation space, and it is difficult to distinguish samples of classes 0 and 1 from others. This suggests that the local model encodes a representation space of samples of classes 0 and 1 and cannot map samples of other classes to suitable positions. The visualization results support our hypothesis that the representation spaces encoded by different local models are inconsistent in the Non-IID setting. \begin{figure}[!t] \centering \subfigure[Vanilla Federated Unsupervised Approach.]{ \begin{minipage}{7cm} \includegraphics[scale=0.20]{fig-3-a.pdf} \end{minipage}} \subfigure[FedCA.]{ \begin{minipage}{7cm} \includegraphics[scale=0.20]{fig-3-b.pdf} \end{minipage}} \caption{T-SNE visualization results of representations on CIFAR10. In federated learning with the Non-IID setting, we use the local model of the client who only has samples of classes 0 and 1 to generate representations. We compare two methods: (a) FedSimCLR (SimCLR combined with FedAvg directly) and (b) FedCA (ours). A and B are the regions where the representations of samples of classes 0 and 1 cluster, respectively, and C is the remaining region.} \label{fig-3} \end{figure} We argue that the cause of the inconsistency is that the clients can only use their own data to train the local models, but the distribution of data varies from client to client. To address this issue, we design a dictionary module, as shown in Figure \ref{fig-2}(b). Specifically, in each communication round, clients use the global model (including the encoder and the projection head) to obtain the normalized projections $\{\tilde{z_i}\}$ of their own samples and send the normalized projections to the server along with the trained local models. Then the server gathers the normalized projections into a shared dictionary. For each client, the global dictionary $\tilde{z}_{dict}$ with $K$ projections is treated as a normalized projection set of negative samples for local contrastive learning. Specifically, in the local training process, for a given minibatch $x_{batch}$ with $N$ samples, we randomly augment them to obtain $x_i$, $x_j$ and generate normalized projections $\tilde{z_i}$, $\tilde{z_j}$. Then we calculate \begin{equation} logits_{batch} = \tilde{z_i} \cdot {\tilde{z_j}}^T, \end{equation} \begin{equation} logits_{dict} = \tilde{z_i} \cdot {\tilde{z}_{dict}}^T, \end{equation} \begin{equation} logits_{total} = concat([logits_{batch}, logits_{dict}], dim=1), \end{equation} where $concat()$ denotes concatenation and the size of $logits_{total}$ is $N \times (N+K)$. Now we turn the unsupervised problem into an $(N+K)$-way classification problem and define \begin{equation} label = [0, 1, 2, \ldots, N-2, N-1] \end{equation} as a class indicator. Then the loss function is given as \begin{equation} loss_{contrastive} = CE(logits_{total}/\tau, label), \end{equation} where $CE()$ denotes the cross entropy loss and $\tau$ is the temperature term. Note that, in each round, the shared dictionary is generated by the global model from the previous round, but the projections of local samples are encoded by the current local models. The resulting inconsistencies in representations may affect the function of the dictionary module, especially in the Non-IID setting.
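A minimal illustrative PyTorch sketch of this dictionary-based loss (Eqs. (4)-(8)) follows, assuming the projections are already $\ell_2$-normalized; the function and tensor names are ours, not part of the released implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def dict_contrastive_loss(z_i, z_j, z_dict, tau=0.5):
    """Sketch of Eqs. (4)-(8): z_i, z_j are (N, d) normalized projections
    of two augmented views; z_dict is the (K, d) global dictionary."""
    logits_batch = z_i @ z_j.t()                          # Eq. (4): (N, N)
    logits_dict = z_i @ z_dict.t()                        # Eq. (5): (N, K)
    logits = torch.cat([logits_batch, logits_dict], 1)    # Eq. (6): (N, N+K)
    label = torch.arange(z_i.size(0), device=z_i.device)  # Eq. (7)
    return F.cross_entropy(logits / tau, label)           # Eq. (8)
\end{verbatim}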
We use temporal ensembling to alleviate this problem, as shown in Figure \ref{fig-2}(c). To be specific, each client maintains a local ensemble dictionary consisting of the projection set $\{Z_i^{t-1}| x_i \in D_u\}$. In each round, client $u$ uses the trained local model to obtain projections $\{z_i^{t}| x_i \in D_u\}$ and accumulates them into the ensemble dictionary by updating \begin{equation} Z_i^t \leftarrow \alpha Z_i^{t-1} + (1-\alpha) z_i^t, \end{equation} and then the normalized ensemble projection is given as \begin{equation} \tilde{z_i^t} = \frac{Z_i^t / (1-{\alpha}^t)}{||Z_i^t / (1-{\alpha}^t)||_2} = \frac{Z_i^t}{||Z_i^t||_2}, \end{equation} where $\alpha \in [0, 1)$ is a momentum parameter and $Z_i^0 = \vec{0}$. We visualize the representations encoded by the local model trained via federated contrastive learning with the dictionary module in the same setting as the vanilla federated unsupervised approach. As shown in Figure \ref{fig-3}(b), we find that the points of classes 0 and 1 are clustered in a small subspace of the representation space, which means that the dictionary module works as we expected. \subsection{Alignment Module for Misalignment Challenge} \begin{figure}[t] \centering \subfigure[FedSimCLR.]{ \begin{minipage}[t]{0.48\linewidth} \includegraphics[width=1.55in]{fig-4-a.pdf} \end{minipage}} \subfigure[FedCA.]{ \begin{minipage}[t]{0.48\linewidth} \includegraphics[width=1.55in]{fig-4-b.pdf} \end{minipage}} \caption{Boxplots of angles between representations encoded by local models on CIFAR10 in federated learning with the IID setting.} \label{fig-4} \end{figure} Due to randomness in the training process, there might be a certain angle difference between the representations generated by two models trained on the same dataset, even though the two models encode consistent spaces. This \textit{misalignment of representations} may have an adverse effect on model aggregation. To verify this, we record the angles between normalized representations generated by different local models in federated learning. We randomly split the training data of CIFAR-10 into 5 IID sets, each consisting of 10000 samples from all 10 classes. We randomly select 2 local models trained by the vanilla federated unsupervised approach (FedSimCLR is used as an example) and use them to obtain normalized representations on the test set of CIFAR-10. As shown in Figure \ref{fig-4}(a), there is always a large angle difference (beyond $20^{\circ}$) between representations encoded by the local models throughout the learning process. We introduce an alignment module to tackle this challenge. As shown in Figure \ref{fig-2}(b), we prepare an additional public dataset $D_{align}$ of small size and train a model $g_{align}(f_{align}())$ (called the alignment model) on it. The local models are then trained via the contrastive loss with a regularization term that replicates the outputs of the alignment model on a subset of the alignment dataset. For a given client $u$, the loss function is defined as \begin{equation} loss_{align}^h = \sum_{i=1}^{|D_{align}^{sub}|}||h_{align}^i - h_{u}^i||_2^2, \end{equation} \begin{equation} loss_{align}^z = \sum_{i=1}^{|D_{align}^{sub}|}||z_{align}^i - z_{u}^i||_2^2, \end{equation} \begin{equation} loss_{align} = loss_{align}^h + loss_{align}^z, \end{equation} where $h_{align}^i = f_{align}(x^i)$, $z_{align}^i = g_{align}(h_{align}^i)$, $h_u^i = f_u(x^i)$, $z_u^i = g_u(h_u^i)$, $x^i \in D_{align}^{sub} \subseteq D_{align}$. We also calculate the angles between representations of local models trained via federated contrastive learning with the alignment module (3200 images randomly sampled from STL-10 are used for alignment) in the same setting as the vanilla federated unsupervised approach. As shown in Figure \ref{fig-4}(b), the angles can be kept within $10^{\circ}$ after 10 training rounds, which suggests that the alignment module helps align the local models.
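The two client-side components can be sketched as follows (illustrative PyTorch; the batched layout of the ensemble dictionary, one projection per row, is an assumption for exposition):
\begin{verbatim}
import torch
import torch.nn.functional as F

def update_ensemble_dict(Z, z_new, alpha=0.5):
    """Eqs. (9)-(10): momentum accumulation of projections; the explicit
    bias correction 1/(1 - alpha^t) cancels under normalization."""
    Z = alpha * Z + (1 - alpha) * z_new      # (|D_u|, d), Z starts at zeros
    return Z, F.normalize(Z, dim=1)          # local dictionary entries

def alignment_loss(h_u, z_u, h_align, z_align):
    """Eqs. (11)-(13): match the alignment model on both the
    representations h and the projections z over the alignment subset."""
    return ((h_u - h_align) ** 2).sum() + ((z_u - z_align) ** 2).sum()
\end{verbatim}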
\subsection{FedCA Algorithm} From the above, the total loss function of the local model update is given as \begin{equation} loss = loss_{contrastive} + \beta loss_{align}, \end{equation} where $\beta$ is a scale factor controlling the influence of the alignment module. We now have a complete algorithm, named Federated Contrastive Averaging with Dictionary and Alignment (FedCA), which handles the challenges of \textit{FURL} well, as shown in Figure \ref{fig-2}. \begin{algorithm}[!h] \caption{\textit{Federated Contrastive Averaging with Dictionary and Alignment (FedCA)}.} {\textbf{Require:} The $n$ clients are indexed by $u$; parameters of the global model (encoder and projection head) $\theta_t$, parameters of the local model $\theta_t^u$, global dictionary $dict_t$, local dictionary $dict_t^u$, the proportion of selected clients $C$, the number of local epochs $E$, local dataset $D_u$, and learning rate $\eta$.} \hspace*{0.02in} {\textbf{Server executes:}} \begin{algorithmic}[1] \State Initialize $\theta_0$ \State Prepare a public dataset $D_{align}$ and an alignment model with parameters $\theta_{align}$ \For{each round $t = 0, 1, 2, ...$} \State $m \leftarrow \max(C \cdot n, 1)$ \State $U_t \leftarrow$ (random set of $m$ clients) \For{each client $u \in U_t$ \textbf{in parallel}} \State $\theta_{t+1}^u, dict_{t+1}^u \leftarrow ClientUpdate(u, \theta_t, dict_t)$ \EndFor \State $\theta_{t+1} \leftarrow \sum_{u \in U_t} \frac{|D_u|}{\sum_{i \in U_t}|D_i|} \theta_{t+1}^u$ \State $dict_{t+1} \leftarrow concat([\{dict_{t+1}^{u}|u \in U_t\}], dim=1)$ \EndFor \end{algorithmic} \hspace*{0.02in} {\textbf{ClientUpdate}($u$, $\theta$, $dict$) \textbf{:}} // Run on client $u$ \begin{algorithmic}[1] \For{each local epoch $i$ from $1$ to $E$} \For{batch $b \in D_u$} \State // Update $\theta$ with Eq. (14) \State $\theta \leftarrow \theta - \eta \nabla\mathcal{L}(\theta; b, dict, D_{align}, \theta_{align})$ \EndFor \EndFor \State Generate $dict^u$ by Eqs. (9)--(10) \State \Return $\theta$, $dict^u$ \end{algorithmic} \end{algorithm} Algorithm 1 summarizes the proposed approach.
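For clarity, the server step can be sketched as follows (illustrative PyTorch, not our exact implementation; the dictionary layout with one projection per row is an assumption):
\begin{verbatim}
import torch

def server_step(local_params, local_dicts, data_sizes):
    """Sketch of the FedCA server step: weighted FedAvg over local model
    parameters plus concatenation of the local dictionaries."""
    total = float(sum(data_sizes))
    global_params = {
        name: sum((sz / total) * p[name]
                  for sz, p in zip(data_sizes, local_params))
        for name in local_params[0]
    }
    # stack local dictionaries; one row per projection (layout assumed)
    global_dict = torch.cat(local_dicts, dim=0)
    return global_params, global_dict
\end{verbatim}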
\section{Experiments} \textit{FURL} aims to learn a representation model from decentralized and unlabeled data. In this section, we present an empirical study of FedCA. \begin{table*}[t] \centering \fontsize{6.5}{8}\selectfont \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Setting}& \multirow{2}{*}{Method}& \multicolumn{2}{c|}{CIFAR10}&\multicolumn{2}{c|}{CIFAR100}&\multicolumn{2}{c|}{MiniImageNet}\cr\cline{3-8} & &5-layer CNN&ResNet-50&5-layer CNN&ResNet-50&5-layer CNN&ResNet-50\cr \hline \multirow{4}{*}{IID}& FedAE&61.23&65.47&34.07&36.56&28.21&31.97\cr &FedPR&55.75&63.52&29.74&30.89&24.76&26.63\cr &FedSimCLR&61.62&68.10&34.18&39.75&29.84&32.18\cr &FedCA (ours)&{\bf 64.87}&{\bf 71.25}&{\bf 39.47}&{\bf 43.30}&{\bf 35.27}&{\bf 37.12}\cr\hline \multirow{4}{*}{Non-IID}& FedAE&60.14&63.74&33.94&37.27&29.00&30.44\cr &FedPR&54.94&60.31&30.70&32.39&24.74&25.91\cr &FedSimCLR&59.21&64.06&33.63&38.70&29.24&30.47\cr &FedCA (ours)&{\bf 63.02}&{\bf 68.01}&{\bf 38.94}&{\bf 42.34}&{\bf 34.95}&{\bf 35.01}\cr \hline \end{tabular} \caption{Top-1 accuracies (\%) of algorithms for \textit{FURL} on linear evaluation.} \label{table-1} \end{table*} \subsection{Experimental Setup} \subsubsection{Baselines.} \textit{AutoEncoder} is a generative method that learns representations in an unsupervised manner by generating, from the reduced encoding, a representation as close as possible to the original input \cite{hinton2006reducing}. \textit{Predicting Rotation} is one of the proxy tasks of self-supervised learning, rotating samples by random multiples of 90 degrees and predicting the degree of rotation \cite{gidaris2018unsupervised}. We solely combine FedAvg with AutoEncoder (named \textit{FedAE}), Predicting Rotation (named \textit{FedPR}), and SimCLR (named \textit{FedSimCLR}), respectively, and use them as baselines for \textit{FURL}. \subsubsection{Dataset.} The CIFAR-10/CIFAR-100 dataset \cite{krizhevsky2009learning} consists of 60000 32x32 colour images in 10/100 classes, with 6000/600 images per class; each has 50000 training images and 10000 test images. The MiniImageNet dataset \cite{vinyals2016matching, deng2009imagenet} is extracted from the ImageNet dataset and consists of 60000 84x84 colour images in 100 classes; we split it into a training dataset with 50000 samples and a test dataset with 10000 samples. We implement FedCA and the baseline methods on the three datasets above in PyTorch \cite{paszke2019pytorch}. \subsubsection{Federated Setting.} We deploy our experiments in a simulated federated learning environment, where we set a centralized node as the server and 5 distributed nodes as clients. The number of local epochs $E$ is 5, and in each round all of the clients obtain the global model and execute local training, i.e., the proportion of selected clients $C$ is $1$. For each dataset, we consider two federated settings: IID and Non-IID. Each client randomly samples 10000 images from the entire training dataset in the IID setting, while in the Non-IID setting, samples are split across clients by class, which means that each client has 10000 samples of 2/20/20 classes of CIFAR-10/CIFAR-100/MiniImageNet.
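For reproducibility, a minimal sketch of the Non-IID partition is given below (illustrative Python/NumPy; the assignment of consecutive classes to each client and the fixed seed are our assumptions for exposition):
\begin{verbatim}
import numpy as np

def split_noniid(labels, n_clients=5, classes_per_client=2,
                 per_client=10000):
    """Give each client `classes_per_client` consecutive classes and
    sample `per_client` images from them (assumed assignment scheme)."""
    rng = np.random.default_rng(0)
    n_classes = int(labels.max()) + 1
    by_class = [np.where(labels == c)[0] for c in range(n_classes)]
    clients = []
    for k in range(n_clients):
        cls = range(k * classes_per_client, (k + 1) * classes_per_client)
        pool = np.concatenate([by_class[c] for c in cls])
        clients.append(rng.choice(pool, per_client, replace=False))
    return clients
\end{verbatim}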
\begin{table*}[h] \centering \fontsize{6.5}{8}\selectfont \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Label Fraction}& \multirow{2}{*}{Setting}& \multirow{2}{*}{Method}& \multicolumn{2}{c|}{CIFAR10}&\multicolumn{2}{c|}{CIFAR100}&\multicolumn{2}{c|}{MiniImageNet}\cr\cline{4-9} & &&5-layer CNN&ResNet-50&5-layer CNN&ResNet-50&5-layer CNN&ResNet-50\cr \hline \multirow{10}{*}{1\%}& \multirow{4}{*}{IID} &FedAvg (Supervised)&31.84&26.68&9.35&8.09&5.83&5.42\cr &&FedAE&35.98&36.86&13.36&14.53&11.71&12.84\cr &&FedPR&34.51&36.47&13.15&14.20&11.52&12.34\cr &&FedSimCLR&43.95&50.00&22.16&23.01&19.14&19.67\cr &&FedCA (ours)&{\bf 45.05}&{\bf 50.67}&{\bf 22.37}&{\bf 23.32}&{\bf 19.20}&{\bf 20.22}\cr\cline{2-9}& \multirow{4}{*}{Non-IID} &FedAvg (Supervised)&20.99&17.72&6.22&5.37&3.92&3.03\cr &&FedAE&23.08&23.43&9.96&9.63&8.45&8.43\cr &&FedPR&22.83&23.17&9.83&9.38&8.30&8.58\cr &&FedSimCLR&26.08&26.03&14.30&14.02&11.02&10.89\cr && FedCA (ours)&{\bf 28.96}&{\bf 28.50}&{\bf 17.02}&{\bf 16.48}&{\bf 13.39}&{\bf 13.03}\cr\hline \multirow{10}{*}{10\%}& \multirow{4}{*}{IID} &FedAvg (Supervised)&50.87&40.44&16.18&14.47&13.46&12.76\cr &&FedAE&51.88&53.64&21.77&22.45&21.73&21.96\cr &&FedPR&51.38&53.32&21.30&21.21&21.67&21.58\cr &&FedSimCLR&59.27&60.67&31.11&31.56&28.45&28.79\cr && FedCA (ours)&{\bf 59.91}&{\bf 61.02}&{\bf 31.37}&{\bf 32.09}&{\bf 28.93}&{\bf 29.44}\cr\cline{2-9}& \multirow{4}{*}{Non-IID} &FedAvg (Supervised)&30.62&21.69&14.90&13.98&11.88&10.13\cr &&FedAE&32.07&32.19&18.77&18.98&13.48&13.65\cr &&FedPR&31.04&31.78&18.39&18.34&13.30&13.24\cr &&FedSimCLR&32.52&33.83&19.91&20.01&15.90&16.03\cr && FedCA (ours)&{\bf 35.78}&{\bf 36.28}&{\bf 21.98}&{\bf 22.46}&{\bf 18.67}&{\bf 18.89}\cr\hline \end{tabular} \caption{Top-1 accuracies (\%) of algorithms for \textit{FURL} on semi-supervised learning.} \label{table-2} \end{table*} \begin{table*}[!t] \centering \fontsize{6.5}{8}\selectfont \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Setting}& \multirow{2}{*}{Method}& \multicolumn{2}{c|}{CIFAR100 $\rightarrow$ CIFAR10}&\multicolumn{2}{c|}{MiniImageNet$\rightarrow$CIFAR10}&\multicolumn{2}{c|}{MiniImageNet$\rightarrow$CIFAR100}\cr\cline{3-8} & &5-layer CNN&ResNet-50&5-layer CNN&ResNet-50&5-layer CNN&ResNet-50\cr \hline -&Random init&86.70&93.79&86.60&93.05&58.05&70.52\cr \hline \multirow{4}{*}{IID} &FedAE&87.33&94.23&86.74&94.23&58.82&71.36\cr &FedPR&87.22&93.89&87.33&93.55&58.23&70.78\cr &FedSimCLR&87.80&94.88&88.03&94.87&59.08&71.85\cr &FedCA (ours)&{\bf 88.04}&{\bf 95.03}&87.91&{\bf 94.94}&58.91&{\bf 71.98}\cr\hline \multirow{4}{*}{Non-IID} &FedAE&87.37&94.35&87.00&94.06&58.56&71.17\cr &FedPR&86.97&93.91&86.92&93.55&58.39&70.25\cr &FedSimCLR&87.04&94.02&86.81&93.97&58.11&70.91\cr & FedCA (ours)&{\bf 87.75}&{\bf 94.69}&{\bf 87.66}&{\bf 94.16}&{\bf 58.93}&{\bf 71.32}\cr \hline \end{tabular} \caption{Top-1 accuracies (\%) of algorithms for \textit{FURL} on transfer learning.} \label{table-3} \end{table*} \subsubsection{Training Details.} We compare our approach with the baseline methods on different encoders, including a 5-layer CNN \cite{krizhevsky2012imagenet} and ResNet-50 \cite{he2016deep}. The encoder maps input samples to 2048-dimensional representations, and a multilayer perceptron then translates the representations to 128-dimensional vectors used to calculate the contrastive loss. Adam is used as the optimizer, and the initial learning rate is 1e-3 with 1e-6 weight decay.
We train models for 100 epochs with a mini-batch size of 128. We set the dictionary size $K=1024$, the momentum term of temporal ensembling $\alpha =0.5$, and the scale factor $\beta=0.01$. 3200 images randomly sampled from STL-10 are used for the alignment module. Data augmentation for contrastive representation learning includes random cropping and resizing, random color distortion, random flipping, and Gaussian blurring. \subsection{Evaluation Protocols and Results} \subsubsection{Linear Evaluation} We first study our method by linear classification on a fixed encoder to verify the representations learned in \textit{FURL}. We perform \textit{FedCA} and the baseline methods to learn representations on CIFAR10, CIFAR100, and MiniImageNet without labels in the federated setting. Then we fix the encoder and train a linear classifier with supervision on the entire dataset. We train this classifier with Adam as the optimizer for 100 epochs and report the top-1 classification accuracy on the test sets of CIFAR10, CIFAR100, and MiniImageNet. As shown in Table \ref{table-1}, federated averaging with contrastive learning works better than the other unsupervised approaches. Moreover, our method outperforms all of the baseline methods thanks to the modules designed for \textit{FURL}, as we expected. \subsubsection{Semi-Supervised Learning} In federated scenarios, the private data at clients may be only partly labeled, so we can learn a representation model without supervision and fine-tune it on labeled data. We assume that each client has 1\% and 10\% labeled data, respectively. First, we train a representation model in the \textit{FURL} setting. Then we fine-tune it (followed by an MLP consisting of a hidden layer and a ReLU activation function) on the labeled data for 100 epochs with Adam as the optimizer and learning rate $lr=1e-3$. Table \ref{table-2} reports the top-1 accuracy of the various methods on CIFAR10, CIFAR100, and MiniImageNet. We observe that the accuracy of the global model trained by federated supervised learning on limited labeled data is significantly worse, and using the representation model trained in \textit{FURL} as the initial model improves performance to varying degrees. Our method outperforms the other approaches, suggesting that federated unsupervised representation learning benefits from the designed modules of \textit{FedCA}, especially in the Non-IID setting. \subsubsection{Transfer Learning} A main goal of \textit{FURL} is to learn a representation model from decentralized and unlabeled data for personalized downstream tasks. To verify whether the features learned in \textit{FURL} are transferable, we set the models trained in \textit{FURL} as initial models, and an MLP is then trained along with the encoder on other datasets. CIFAR images (32*32*3) are resized to the same size as MiniImageNet (84*84*3) when we fine-tune the model learned from MiniImageNet on CIFAR. We train for 100 epochs with Adam as the optimizer and set the learning rate $lr=1e-3$. Table \ref{table-3} shows that the model trained by FedCA achieves excellent performance and outperforms all of the baseline methods in the Non-IID setting. \subsection{Ablation Study} We perform an ablation study on CIFAR-10 in the IID and Non-IID settings to demonstrate the effectiveness of the alignment module and the dictionary module (with temporal ensembling).
We implement (\romannumeral1) FedSimCLR, (\romannumeral2) federated contrastive learning with only the alignment module, (\romannumeral3) federated contrastive learning with only the dictionary module, (\romannumeral4) federated contrastive learning with only the dictionary module based on temporal ensembling, and (\romannumeral5) FedCA, respectively; a linear classifier is then used to evaluate the performance of the frozen representation model with supervision. Figure \ref{fig-5} shows the results. \begin{figure}[h] \centering \includegraphics[scale=0.23]{fig-5.pdf} \caption{Ablation study of modules designed for \textit{FURL} by linear classification on CIFAR-10 (ResNet-50).} \label{fig-5} \end{figure} We observe that the alignment module improves the performance by about 1.4\% in both the IID and Non-IID settings. With the help of the dictionary module (without temporal ensembling), there are 2.5\% and 2.7\% increases in accuracy under the IID and Non-IID settings, respectively. Moreover, we note that the representation model learned in \textit{FURL} benefits more from the temporal ensembling technique in the Non-IID setting than in the IID setting, probably because the features learned in the IID setting are stable enough that temporal ensembling plays a far less important role there. The model achieves excellent performance when we combine federated contrastive learning with the alignment module and the dictionary module based on temporal ensembling, which suggests that these two modules work collaboratively and help to tackle the challenges in \textit{FURL}. \section{Conclusions} We formulate a significant and challenging problem, \textit{Federated Unsupervised Representation Learning (FURL)}, and identify two main challenges of this problem: \textit{inconsistency of representation spaces} and \textit{misalignment of representations}. In this paper, we propose a contrastive learning-based federated learning algorithm named FedCA, composed of the dictionary module and the alignment module, to tackle the above challenges. Thanks to these two modules, FedCA enables distributed local models to learn consistent and aligned representations while protecting data privacy. Our experiments demonstrate that FedCA outperforms algorithms that solely combine federated learning with unsupervised approaches and provides a stronger baseline for \textit{FURL}. In future work, we plan to extend FedCA to cross-modal scenarios where different clients may have data in different modalities, such as images, videos, texts, and audio.
\subsection{Secure Broadcast Ranging Protocol} \Cref{fig:appendix-protocol} presents our proposed secure broadcast ranging protocol. \begin{figure*}[t] \procedureblock{}{ \textbf{Initiator}~\initiator \> \< \textbf{Reflector}~\reflector_k \pclb \pcintertext[dotted]{Start Synchronization session (optional)} \sync\xleftarrow{} \{E_{\key}(\initiator,\epoch)|\var{Postamble}\} \phantom{123456} \> \sendmessageright*{\sync} \< \\ \> \< \epoch \xleftarrow{} \fn{syncEpoch}(\sync) \pclb \pcintertext[dotted]{End Synchronization session} \pclb \pcintertext[dotted]{Start Ranging session} \req \xleftarrow{} H_{\key}(\initiator,\epoch) \> \sendmessageright* {\req} \< \\ \{T^k_{W,n}\} \xleftarrow{} \fn{genDelay}(H_\key,\reflector_k,\epoch,n,W) \> \< \{T^k_{W,n}\} \xleftarrow{} \fn{genDelay}(H_\key,\reflector_k,\epoch,n,W) \\ \{\resp^k_{n}\} \xleftarrow{} H_\key(\reflector_k,\epoch,n) \> \< \{\resp^k_{n}\} \xleftarrow{} H_\key(\reflector_k,\epoch,n) \pclb \pcintertext[center]{(Start Batch Response)} \> \< (\text{wait } T^k_{W,0}) \\ \> \sendmessageleft*{\resp^k_0} \< \\ \> \vdots\phantom{12345678912345} \< \vdots\phantom{12345} \\ \> \< (\text{wait } T^k_{W,n}) \\ \> \sendmessageleft*{\resp^k_n} \< \pclb \pcintertext[center]{(End Batch Response)} \text{Compute ToF} \> \< \\ \batch \xleftarrow{} \{\resp^k_n\} \> \< \\ \tilde{\batch} \xleftarrow{} \fn{scanResponses}(\batch) \> \< \\ ToF_k \xleftarrow{} \fn{estimateToF}(\tilde{\batch},\{T^k_{W,n}\}) \> \< \pclb \pcintertext[dotted]{End Ranging session} } \caption{Secure Broadcast Ranging protocol.} \label{fig:appendix-protocol} \end{figure*} \section{Proposed Protocol: Overview} \label{sec:problem-approach} Our approach is time-based, because it is easier to secure (requiring more effort from the adversary to overcome timing constraints) than energy-based and phase-based ranging. \subsection{Threat Model} \label{sec:threat-model} The system consists of multiple nodes executing the ranging protocol at any given time to learn their distance. The main purpose of an external adversary is \emph{either to learn the private distance information, to block the ranging, or to deceive the nodes into perceiving a wrong distance}. \BfPara{Cryptographic Capabilities} Ranging nodes use a shared key $\key$ for cryptographic operations. We assume the key $\key$ is shared using an out-of-band channel. Messages exchanged for ranging are produced using either symmetric encryption or a cryptographic pseudorandom function (PRF) with the shared key $\key$. The cryptographic operations we use are assumed to be secure and cannot be broken by the adversary. \BfPara{Communication Capabilities} The communication parameters, such as center frequency, bandwidth, modulation, and coding, are public, and the adversary is able to capture and transmit signals on the same channel as the ranging nodes. The adversary can create interference with the ranging transmissions by transmitting its own signal. However, as messages are derived using secure cryptographic operations, the adversary cannot predict the transmitted signal (which is indistinguishable from random), and consequently he is unable to annihilate or modify the signal in a way meaningful to himself. We also assume that the adversary cannot transmit signals with higher or lower propagation speed compared to other transmitters in the same wireless medium. This assumption implies that a signal replay will increase the ToF.
\BfPara{Mobility} Nodes are assumed to be mobile, and at the time of a ranging session, a node has no knowledge of whether others are in its communication range. Similarly, the adversary does not know the exact location of the ranging nodes. However, the adversary may know whether ranging nodes are in their communication range based on side-channel information. \BfPara{Attack Scenarios} We consider all the attack scenarios discussed in~\Cref{sec:background}, namely sniffing, denial of service, distance enlargement, and distance reduction attacks. In all scenarios, we assume honest (non-malicious) ranging nodes in the presence of an adversary. \subsection{Approach Overview} \label{sec:approach} \begin{figure*}[thb] \centering \includegraphics[width=0.8\textwidth]{figures/protocol-multi.pdf} \caption{Example of the secure broadcast ranging protocol between one initiator and two reflectors. The SYNC message is sent only if nodes are unsynchronized. The postamble is marked with a black star as the only public content in the protocol. The initiator broadcasts the REQ and expects to receive responses as a combined signal (collision) from the reflectors. Both REQ and RESP messages are secret symbol sequences without a preamble (in contrast with the SYNC). In this example, each reflector replies with 3 responses. The waiting periods between the REQ and the first RESP and between consecutive RESPs are cryptographically randomized.} \label{fig:multidelay} \end{figure*} \begin{table}[t] \small \centering \begin{tabular}{cl} \toprule $\initiator$ & Initiator node starting a ranging session \\ $\reflector_k$ & Reflector node sending back response to initiator\\ $\key$ & Shared key used by ranging nodes \\ $E_{\key}(m)$ & Encryption of message $m$ using key $\key$ \\ $H_{\key}(m)$ & Secure PRF of $m$ using key $\key$ (e.g., HMAC-SHA2) \\ $\epoch$ & Current time epoch \\ $\Delta{}\epoch$ & Time epoch duration \\ $t_S$ & Send time of the request recorded by initiator \\ $t_R$ & Received time of the response recorded by initiator \\ $T$ & Sampling period \\ $T_E$ & Subsample timing error \\ $T_W$ & Random waiting period between request and response \\ $T_{RESP}$ & Fixed duration of a ranging response \\ $W$ & Waiting window\\ $\batch$ & A batch of responses replied per reflector\\ \bottomrule \end{tabular} \caption{Notations.} \label{tab:notations} \end{table} We consider a ranging session between multiple nodes, in which one node, called the \emph{initiator} $\initiator$, is interested in discovering the distance from itself to other nodes, called \emph{reflectors}\footnote{From the security perspective, the initiator is known as the \emph{verifier}, who initiates the secure ranging protocol and makes a judgement about the computed distance based on responses from the reflectors, which play the role of the \emph{prover} of the private information. However, this terminology is used when the prover is not necessarily honest.} $\reflector_1,\reflector_2,\ldots$ Our notations are summarized in~\Cref{tab:notations}. The ranging session is started by the initiator. In our ranging protocol, we introduce the \emph{time epoch} as the discrete time instant computed from the real-time system clock as $\epoch=\lfloor{}t/\Delta{}\epoch\rfloor$, with $t$ and $\Delta{}\epoch$ representing the current real time and the period between two consecutive epochs, respectively. Our ranging protocol, illustrated in~\Cref{fig:multidelay}, is a time-based ranging protocol consisting of the following steps.
\begin{enumerate}[wide] \item \emph{Synchronization:} The main goal of this step is to ensure that the nodes in our system are loosely synchronized with respect to the time epoch. It is \emph{not} required for every ranging session. To perform the synchronization, the initiator transmits a $\sync$ message consisting of the initiator's identifier and the current time epoch $\epoch$, IND\$-CPA encrypted (indistinguishable from random) with the shared key $\key$~\cite{Rogaway2004}. \begin{equation} \label{eq:sync} \sync := \{E_{\key}(\initiator,\epoch) | \var{Postamble}\}. \end{equation} The synchronization message is embedded in a frame with a public postamble that enables the reflectors to receive it. \item \emph{Requesting:} The ranging procedure is started by the initiator choosing a random epoch and sending out a ranging request $\req$ at the beginning of the epoch. If a prior $\sync$ message is required, $\req$ is sent right after it. The $\req$ message is a fixed-length sequence of physical signal symbols generated with a cryptographic PRF from the shared key $\key$, the initiator's identifier, and the current time epoch $\epoch$. \begin{equation} \label{eq:req} \req := H_{\key}(\initiator,\epoch). \end{equation} In contrast to $\sync$, no public preamble or postamble is used for $\req$. On the other side, as long as $\epoch$ is synchronized (based on a previous synchronization), the reflectors begin scanning for the secret sequence. \item \emph{Responding:} Upon receiving the ranging request, each reflector $\reflector_k$ sends back a response batch $\batch$ consisting of multiple $\resp^{k}_{n}$ messages ($n=0\ldots{}|\batch|-1$), which are secret symbol sequences derived with the shared key $\key$. \begin{equation} \label{eq:resp} \resp^k_{n} := H_\key(\reflector_k,\epoch,n). \\ \end{equation} Each response is transmitted at an exact time specified by a random waiting period $T_W$, a secret value generated per reflector using a cryptographic PRF and the shared key (over the reflector identifier, the epoch $\epoch$, and the response counter). The use of multiple responses and random waiting periods by each reflector not only improves the accuracy of the distance computation for the ranging session, but also increases the challenge to the adversary. The ranging session ends when the reflectors complete all $|\batch|$ responses, and finally the initiator computes the distance to each of them. \end{enumerate} We now highlight key properties of our protocol. A more thorough discussion is presented in~\Cref{sec:protocol,sec:security}. \BfPara{Secure Ranging} A major reason for successful exploits (e.g., ED/LC attacks) on the physical layer of existing ranging systems is rooted in the adversary's knowledge of the preambles prepended to the request and response exchanged between nodes. Using public preambles is, in fact, the typical design of today's communication systems, for the purpose of signal synchronization and channel estimation. In our ranging protocol, the request and response are indistinguishable from noise, and are detectable and decodable only by ranging users. This effectively provides protection against attacks based on early preamble detection. We also note that the synchronization is not a part of the distance computation process, and it is transmitted infrequently. The use of secret sequences of random symbols not only shields us from being detected, but also mitigates the impact of jamming attacks. This essentially enables our ranging system to operate under low signal-to-noise ratio (SNR) conditions.
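For concreteness, the following illustrative Python sketch shows one way the secret symbol sequences could be expanded from a PRF; the HMAC-SHA256 construction, function name, and field encoding here are our assumptions for exposition, not the protocol's exact derivation.
\begin{verbatim}
import hashlib
import hmac

def prf_symbols(key, *fields, length=512):
    """Expand an HMAC-SHA256 keystream over (fields) into a +/-1 BPSK
    symbol sequence; a hypothetical instantiation of H_K(.)."""
    msg = b"|".join(str(f).encode() for f in fields)
    stream = b""
    counter = 0
    while len(stream) * 8 < length:
        stream += hmac.new(key, msg + counter.to_bytes(4, "big"),
                           hashlib.sha256).digest()
        counter += 1
    bits = [(stream[i // 8] >> (i % 8)) & 1 for i in range(length)]
    return [2 * b - 1 for b in bits]

# e.g. RESP_n of reflector R_k in epoch tau: prf_symbols(K, "R_k", tau, n);
# the waiting periods T_W can be drawn from the same keystream modulo W.
\end{verbatim}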
The use of the random waiting period $T_W$ prevents the adversary from learning the actual distance, even with coarse accuracy. One can notice that our protocol is asymmetric (i.e., there is no feedback from the initiator after the responses). As a result, the reflectors are unable to estimate the distance to the initiator with good accuracy. This is an intended design choice from the security perspective, which protects the private distance information from honest-but-curious reflectors. For applications where this security feature is not required, our ranging protocol can be extended (e.g., by making the protocol symmetric or using a higher layer for exchanging information afterwards) to disclose the distance to the reflectors without the need to initiate a new ranging session from each reflector. \BfPara{Broadcast Ranging} Our ranging protocol does not put constraints on the order of responses, which can arrive at the initiator in the form of a collision. The collision, however, does not prevent the initiator from learning the distance; rather, it saves communication bandwidth for the whole system while increasing the challenge to the adversary. At the initiator, individual responses from each reflector can be extracted from the combined received signal thanks to the independence of the encrypted symbol sequences. In addition, we use a Successive Interference Cancellation (SIC) technique to improve the reception and accuracy of responses from far-away reflectors under interference from close-by ones. \BfPara{Accuracy} With the two-way ranging procedure, the initiator frees itself from any dependency on the reflectors' clocks. However, the accuracy of the distance estimation depends tightly on the sample period (a higher sampling rate results in better accuracy). In this work, we build our ranging prototype on the USRP platform and operate the system at \SI{100}{\MHz}. This gives a timing error of $\SI{\pm 5}{\ns}$ (equivalent to a distance error of $\SI{\pm 1.5}{m}$), making the sample rate the bottleneck in achieving highly accurate ToF estimation. To overcome this issue, we use a subsampling interpolation technique to achieve subsample ToF estimation and significantly improve the accuracy. \BfPara{Spectrum Flexibility} The need for large bandwidth for accurate ranging (e.g., IR-UWB with \SI{500}{\MHz}) limits the applicability of existing ranging protocols, as many communication systems operate in limited spectrum (e.g., the ISM band). In contrast, our ranging system can operate in an \emph{upsampling} mode, where the signal bandwidth is much smaller than the sample rate, limiting interference to other communications. \section{Background and Related Work} \label{sec:background} \subsection{Ranging Techniques Overview} Most popular ranging techniques fall into one of three main types: energy-based, phase-based, and time-based estimation. \BfPara{Energy-based Ranging} The core idea of energy-based ranging relies on the assumption that the relationship between the distance and the Received Signal Strength can be expressed by a simple path loss model. While this model provides an easy and inexpensive method to compute the distance, it usually achieves poor accuracy (on the order of meters) even in free-space environments, due to complex wireless channel propagation and fading effects and irregular antenna patterns. As a result, it is typically only used in low-cost systems, where complex signal processing cannot be afforded~\cite{zanella16rssranging}.
\BfPara{Phase-based Ranging} When a signal is transmitted on a certain frequency, its phase change is a periodic function of the travel time. By transmitting signals on a set of different frequencies and measuring the received phases, the distance can be derived unambiguously. This is the basis for phase-based ranging, typically used in multitone communication systems. Since clock synchronization is not required between nodes, phase-based ranging is suitable for many applications. For instance, in Wi-Fi systems, previous work has demonstrated that decimeter accuracy can be achieved~\cite{vasisht16chronos,manikanta15spotfi}. \BfPara{Time-based Ranging} In contrast to phase-based ranging, the time-based ranging technique relies on the transmission and reception times of a signal, and the distance is computed based on the Time of Flight (ToF)~\cite{dardari09}. Time-based ranging systems require high precision in ToF measurements. For example, a small error of \SI{10}{\ns} results in a large distance error of \SI{3}{\m}. Without time synchronization, the ranging between nodes is typically performed in a two-way fashion, where a return signal is sent back to help estimate the ToF independently of the time reference at the other end. With the recent introduction of ranging capability into the IEEE 802.15.4 standard~\cite{ieee802154}, ranging systems based on Impulse Radio Ultra Wide Band (IR-UWB) have become popular, such as Decawave~\cite{decawave} and 3dB Access~\cite{3dbaccess}. Ranging using SDR has recently been explored in~\cite{sark17usrpranging}. In that work, using a USRP N210 with a maximum rate of \SI{50}{MHz} together with a maximum-length sequence (m-sequence) of length $1024$ for detection, the authors achieved \SI{40}{\cm} accuracy. \subsection{Ranging Attacks} \label{sec:background-attacks} \BfPara{Sniffing} While messages exchanged between ranging nodes can be encrypted to hide the embedded private data, a sniffer can infer the distance based on its observation of the transmitted signal in the wireless medium. The observation can focus on changes in amplitude or phase to derive an estimate of the ToF. While deriving sub-meter accuracy is challenging, achieving meter-level accuracy might satisfy the adversary's goals. Furthermore, the attack success rate can be improved if a capable adversary is in close proximity. \BfPara{Denial of Service} With DoS attacks, the goal of the attacker is to prevent or degrade the ranging session. To perform such attacks, jamming signals are emitted and targeted at the communication link between benign nodes. Depending on the underlying physical layer used for ranging, jamming does not have to be continuous. An example of a non-continuous jamming attack is the cicada attack~\cite{poturalski10cicada}, which targets IR-UWB ranging with intermittent pulses that can block or degrade the distance estimation at the receiver. \BfPara{Distance Enlargement} In this attack, an adversary aims to deceive the ranging system into thinking that the nodes are farther apart than their actual distance~\cite{Francillon2010RelayAO}. As a result, the system could deactivate protection mechanisms, e.g., collision avoidance in self-driving cars or drones. The feasibility of these attacks depends on the adversary's capability of guessing the victim's signals in advance and adaptively generating malicious signals.
The adversary can also replay the transmitted signal with overshadowing or annihilation, as demonstrated in IR-UWB systems~\cite{taponecco13,singh19uwbed}. \BfPara{Distance Reduction} As the opposite of distance enlargement, the goal of distance reduction attacks is to decrease the perceived distance, often between entities that are out of communication range, thus enabling the adversary to bypass distance-based security systems (e.g., PKES, smart home security). The basis of this attack is the signal-relaying capability of the man-in-the-middle adversary, who is able to send a possibly modified version of the request or response such that a shorter distance is resolved. This attack is especially effective against energy-based or phase-based ranging systems, since the replayed version can be crafted with an appropriate amplification factor or phase change~\cite{olafsdottir2017security}. For time-based ranging in a theoretical setting, it is impossible to perform distance reduction attacks, since relaying can only increase the ToF. Based on this principle, the distance bounding protocol~\cite{brandchaum01db} and its variants~\cite{brelurut} were developed as a logical layer on top of an existing ranging physical layer to provide protection from distance reduction attacks. In practice, however, if the distance bounding protocol is not integrated properly, the adversary can still perform these attacks by exploiting the properties of the concrete physical layer employed in the system, without breaking any cryptographic assumptions made by the distance bounding protocol. One physical-layer attack against distance bounding protocols is Early-Detect/Late-Commit (ED/LC), which exploits the latency of modulation and demodulation in RFID radios~\cite{hancke08rfid}, or the pulse characteristics of both preamble and payload in IR-UWB~\cite{manuel10dd} and chirp-based systems~\cite{aanjhan12cbattack}. The main idea of the attack is to take advantage of the predictable signal to transmit a guessed portion of the signal earlier than the ranging node and fill in the rest immediately after some legitimate part is revealed. In UWB, the early-detect phase corresponds to guessing the whole symbol when only half of the symbol is revealed, and the late-commit phase corresponds to filling in higher pulses or zeros to compensate for any incorrect guess. To prevent this attack, a recent countermeasure has been proposed for UWB that employs a pulse reordering method~\cite{singh19uwbpr}. \section{Conclusion} We proposed a secure broadcast ranging protocol with spectral flexibility that has minimal impact on accuracy. Flexibility is achieved through upsampling and successive interference cancellation. Stealth and security are achieved through messages cryptographically randomized in time and code. We analyze its security against various attacks, such as denial of service, distance enlargement, distance reduction, and sniffing. The protocol is designed for flexible implementation on FPGA and/or a host (with minimal FPGA modifications). We evaluate our ranging system extensively, both in real over-the-air experiments for a pair of devices and in simulations for scalability to a large number of reflectors.
We demonstrate through extensive performance evaluation that we can achieve an accuracy below \SI{20}{\cm} over a wide range of SNRs (as low as $\SI{0}{dB}$) and spectrum from $\SI{25}{MHz}$ to $\SI{100}{MHz}$, even when 20 simultaneous reflectors are constrained to sessions of $\SI{100}{us}$, leading to 10,000 simultaneous sessions per second. For sessions of $\SI{1}{ms}$, leading to 1,000 simultaneous sessions per second, the system easily scales to over 100 reflectors. \subsection{System Implementation} We implement the secure broadcast protocol using the GNU Radio platform and the UHD framework. In a ranging session, transmissions between nodes are BPSK-modulated signals generated from random binary sequences of $L$ bits. When the upsampling mode is enabled, the signal is interpolated with a low-pass filter before being transmitted. Since the random sequences are derived from the epoch $\epoch$, they can be pre-generated as soon as the epoch $\epoch$ is determined, to reduce the processing delay. On the receiving side, a signal detector is constructed for each sequence. The cross-correlation is computed by an FIR filter whose taps are the detection sequence in reversed order. The filter operation is optimized using an FFT-based implementation of the FIR filter. \BfPara{Initiator Implementation} Since the initiator uses multiple detectors to simultaneously search for response sequences from multiple reflectors, high processing capability is required for heavy operations such as correlation computation and the SIC procedure. For this reason, we implement the initiator on the host. \BfPara{Reflector on FPGA} \label{appendix:fpga} The low processing demand allows us to move the crucial processing tasks of the reflector to the FPGA. The FPGA implementation removes the need to stream data from the SDR device to the host for any signal processing task. By avoiding data streaming, we reduce a significant amount of communication and processing overhead for the host. This enables us to deploy a reflector even on a slow laptop, which not only provides mobility convenience, but also saves us significant time during over-the-air experiments. Moreover, since data is processed directly in the FPGA, the operation follows a synchronous routine with respect to the \SI{200}{\MHz} master clock, and together with the capability to buffer responses in memory, the FPGA provides fast detection and reaction within a few $\SI{5}{ns}$ clock cycles. \BfPara{Hardware Latency} In the host and FPGA implementations of the initiator and reflector, there are processing latencies caused by the ADC (receive chain) and DAC (transmit chain) during the signal conversion between the analog and digital domains. Therefore, the ToF expression in~\Cref{eq:timing1,eq:timing2} for the over-the-air experiments also includes an additional hardware latency $T_{HW}$. The ToF with hardware latency is rewritten as \begin{equation*} \widetilde{ToF} = \frac{1}{2}(t_R-t_S-T_W-T_{\var{RESP}}-T_{HW}) - T_E \end{equation*} Since $T_{HW}$ is constant, it can be determined through a linear fit on experimental values. We perform extensive measurements between two devices at different distances. The data is collected for each distance in the range from \SI{1}{\m} to \SI{14}{\m} in steps of \SI{1}{\m}, in a large backyard with a line of sight. At each position, we perform $1000$ measurements and fit the latency over all distances. We verified that this hardware latency value is applicable to devices of the same model.
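As an illustration, the linear fit can be sketched as follows (illustrative Python/NumPy; the residual here denotes the measured RTT after subtracting $T_W$, $T_{\var{RESP}}$, and $2T_E$, and treating the fit intercept as $T_{HW}$ is our reading of the calibration, not the exact code used):
\begin{verbatim}
import numpy as np

C_LIGHT = 299792458.0  # speed of light (m/s)

def fit_hw_latency(distances, residuals):
    """Fit residual = 2*d/c + T_HW over known ground-truth distances;
    distances in meters, residuals in seconds."""
    slope, intercept = np.polyfit(distances, residuals, 1)
    # slope should come out close to 2/C_LIGHT if the model holds
    return intercept  # estimated T_HW
\end{verbatim}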
\subsection{Narrowband Sub-meter Ranging} \label{sec:distance-computation} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{figures/timing.pdf} \caption{Timing analysis of an example ranging session carried out with a bandwidth of $\SI{100}{MHz}$ ($T=\SI{10}{ns}$). Signals are sampled at positive clock edges. The initiator records the sent time $t_S$ at the request's last symbol and the received time $t_R$ at the response's last symbol. The secret random waiting period $T_W$ and response duration $T_{RESP}$ are known to the initiator. The ToF is computed by the initiator based on the $\var{RTT}=t_R-t_S$ and the estimation of the timing error $T_E$.} \label{fig:timing} \end{figure*} \subsubsection{Timing Analysis} \label{sec:timing-analysis} Consider a ranging session consisting of one request-response pair, where an initiator $\initiator$ sends a ranging request and a reflector $\reflector$ replies with a response (\Cref{fig:timing}). The start time $t_S$ is recorded as the sent time of the last symbol of the request, and the received time $t_R$ is recorded at the last symbol of the response. Both $t_S$ and $t_R$ are based on the initiator's clock. As such, the distance is computed solely based on the time recordings done at the initiator. The reflector's time is not used for the distance computation, whereas the random waiting period $T_W$ and the response duration $T_{\var{RESP}}$ are known to the initiator. For ranging purposes, we are interested in the precise timing of the transmission. Since digital samples are transmitted on clock edges, the recorded value $t_S$ is the actual sent time. The recorded received time $t_R$, however, might not reflect the actual arrival time of the message, as signals are only sampled at discrete time instants with a sample period $T$, while a signal can arrive anytime within the period. The difference between the recorded and actual arrival time is represented by the timing error $T_E$. A positive value of $T_E$ indicates that the actual arrival time is earlier than the recorded receive time, whereas $T_E$ is negative in case the signal is sampled early. Based on the timing diagram in~\Cref{fig:timing}, the Round Trip Time (RTT) is formulated as: \begin{equation} \label{eq:timing1} \var{RTT} = t_R-t_S = 2(\var{ToF} + T_E) + (T_W + T_{\var{RESP}}) \end{equation} In \Cref{eq:timing1}, we have made the assumption that the ToF is symmetric between nodes (the travel time is the same in either direction) during the very short ranging duration. The ToF can be derived as \begin{equation} \label{eq:timing2} ToF=\frac{1}{2}(t_R-t_S-T_W-T_{\var{RESP}}) - T_E \end{equation} It is worth noting that while a clock drift might exist between the initiator and reflector (typically up to $\pm\SI{40}{ppm}$), it is negligible in comparison with the timing error $T_E$, which can be as large as the sample period ($-T<T_E<T$). Therefore, $T_W$ and $T_{\var{RESP}}$ are considered deterministic and identically observed on both sides. The unknown and dominant factor determining the accuracy of the ToF is the timing error $T_E$, which is addressed in the following discussion.
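For concreteness, the following illustrative Python sketch applies \Cref{eq:timing2} to convert the recorded round trip into a distance; the function name and the speed-of-light constant are the only assumptions.
\begin{verbatim}
C_LIGHT = 299792458.0  # speed of light (m/s)

def estimate_distance(t_R, t_S, T_W, T_RESP, T_E):
    """Recover the one-way distance from the recorded round trip."""
    tof = 0.5 * (t_R - t_S - T_W - T_RESP) - T_E
    return tof * C_LIGHT

# e.g. with T = 10 ns sampling, an uncorrected T_E of T/2 alone would
# bias the distance by about 1.5 m, hence the subsample correction below.
\end{verbatim}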
\subsubsection{Time of Flight Estimation} \label{sec:tof-estimation} The Time of Flight estimation involves the initiator detecting the response to obtain the recorded received time $t_R$ and the timing error $T_E$. Our signal detection method relies on the computation of the cross-correlation between the received signal and the known response signal. The ranging request and responses are secret random sequences of complex symbols generated as in~\Cref{eq:req,eq:resp}. They possess good correlation properties (\Cref{sec:sequences}), such that the correlator outputs, with high probability, a large correlation value when the pattern is present in the received signal, while producing a significantly smaller value for noise or an uncorrelated sequence. Not only can the presence of the pattern be detected, but the position of the peak also determines the received time instant $t_R$. Since the cross-correlation is energy-sensitive, we use its normalized version (relative to the received signal energy) to enable detection in extreme cases of low and high Signal to Interference plus Noise Ratio (SINR). The normalized cross-correlation $C_l$ computed over an $L$-length sequence at time lag $l$ is given by \begin{equation} \label{eq:xcorr-norm} C_l = \frac{\sum_{n=0}^{L-1} r_{n-l} p_n^*} {\sqrt{(\sum_{n=0}^{L-1}|r_{n-l}|^2) (\sum_{n=0}^{L-1}|p_n|^2)}} \end{equation} where $*$ denotes the complex conjugate operator, and $\{r\}$, $\{p\}$ are the received samples and the pattern to be detected, respectively. The pattern is located at position $M$, where the peak of the absolute value of $C_l$ is found. \begin{equation} \label{eq:peak} t_R = M = \argmax_l{}|C_l| \quad \textrm{if } |C_l|\ge\alpha |C_{l'}|, \quad l\ne{}l', |l-l'|\le L_0 \end{equation} In~\Cref{eq:peak}, a peak is a local maximum of the cross-correlation that is significantly higher than the values in its vicinity $[l-L_0,l+L_0]$ by a ratio threshold $\alpha$. Setting $\alpha$ and $L_0$ appropriately reduces the false positive detection rate. \BfPara{Timing Correction} \label{subsample} The peak computed based on~\Cref{eq:xcorr-norm,eq:peak}, however, only approximates the true peak, which could be found if we were able to sample the signal exactly at the arrival time. In reality, the received time $t_R$ might be recorded earlier or later than the actual arrival time, with average error $E[T_E]=T/2$. For example, if our system samples at \SI{100}{\MHz} (i.e., $T=\SI{10}{\ns}$), without an additional technique the average distance error would be $\SI{1.5}{\m}$. To improve the accuracy, we estimate the timing error $T_E$ by interpolating the cross-correlation to find the subsample peak. The subsample interpolation is based on the model that the vicinity of the true peak on the cross-correlation curve can be approximated by an analytical function. Essentially, the true peak can be computed from a few adjacent points in its surrounding region. This technique is typically used in digital ultrasonic measurement systems, where the distance to a target body (e.g., the sea floor) is measured by emitting a signal and estimating the RTT from the passively reflected signal. The average timing error in that case is half of that in our digital ranging system, because there is no timing error at the passive reflector. In this work, we considered several approximation functions for interpolation~\cite{svilainis13subsample,svilainis2008analysis,viola2005spline} and found that the Gaussian function achieves the best balance between accuracy improvement and computational complexity. Specifically, if $C_M$ denotes the cross-correlation at the peak found in~\Cref{eq:peak}, the discrete-time values around $C_M$ can be modelled by a Gaussian function $C(x)=a\cdot{}\exp\left(-\frac{(x-b)^2}{2c^2}\right)$ for $x\in[M-1,M+1]$. Using the points $C_{M-1},C_M,C_{M+1}$, we can compute the true peak and derive the timing error as \begin{equation} T_{E} = -T \frac{\ln{C_{M+1}} - \ln{C_{M-1}}} {4 \ln{C_{M}} - 2 \ln{C_{M-1}} - 2 \ln{C_{M+1}}}. \end{equation}
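A minimal sketch of the whole detection step follows (illustrative Python/NumPy, not our exact implementation; excluding a small guard band around the peak in the vicinity test is our assumption, so that the main-lobe samples used for interpolation do not suppress the peak):
\begin{verbatim}
import numpy as np

def detect_and_refine(r, p, T=10e-9, alpha=50, L0=256, guard=2):
    """Normalized cross-correlation detection followed by Gaussian
    subsample interpolation of the timing error T_E."""
    L = len(p)
    corr = np.correlate(r, p, mode="valid")      # conjugates p internally
    energy = np.convolve(np.abs(r) ** 2, np.ones(L), mode="valid")
    C = np.abs(corr) / np.sqrt(energy * np.sum(np.abs(p) ** 2) + 1e-30)
    M = int(np.argmax(C))
    # peak test: must dominate its vicinity by the ratio threshold alpha
    lo, hi = max(M - L0, 0), min(M + L0 + 1, len(C))
    vicinity = np.concatenate([C[lo:max(M - guard, lo)],
                               C[min(M + guard + 1, hi):hi]])
    if vicinity.size == 0 or C[M] < alpha * vicinity.max():
        return None                              # no response detected
    if not 0 < M < len(C) - 1:
        return M, 0.0                            # peak at the edge
    lnC = np.log(C[M - 1:M + 2])                 # C_{M-1}, C_M, C_{M+1}
    T_E = -T * (lnC[2] - lnC[0]) / (4 * lnC[1] - 2 * lnC[0] - 2 * lnC[2])
    return M, T_E
\end{verbatim}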
\subsection{Pair Ranging} \subsubsection{Accuracy} The pair ranging evaluation is carried out in a real testbed. The initiator and the reflector are placed at increasing distances from each other. Both nodes are at the same height of \SI{1.2}{\m} and operate at a sample rate of \SI{100}{MHz}. Each node is configured to transmit with \SI{20}{\dB} gain and receive with \SI{30}{\dB} gain. The batch mode is used with batch size $|\batch|=10$. For each ground truth distance, we perform 10 ranging sessions and compute the average estimated distance and the average distance error (mean absolute error). \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{figures/eval/outdoor.eps} \caption{Pair ranging in outdoor environment.} \label{fig:outdoor} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{figures/eval/outdoor-error.eps} \caption{Accuracy of pair ranging in outdoor environment.} \label{fig:outdoor-error} \end{figure} \Cref{fig:outdoor,fig:outdoor-error} show the average estimated distances and the average estimation error in an outdoor environment (a large backyard with a fence and a few trees) for distances varying from \SI{1}{\m} to \SI{14}{\m}. To show the crucial role of timing error estimation with subsample interpolation, we also show the results of measurements without subsample interpolation. It can be clearly observed from \Cref{fig:outdoor-error} that the average distance error is around $\SI{15}{\cm}$ with subsample interpolation, while without this fine-grained timing estimation, the error fluctuates more and can reach \SI{75}{\cm}. In the latter case, the zig-zag pattern is evidence of the coarse sampling resolution, where the error is high when the ToF is not a multiple of the sample period. The smoother curve with subsample interpolation demonstrates its effectiveness in timing error correction. \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{figures/eval/indoor-error.eps} \caption{Accuracy of pair ranging in \textit{indoor} environment.} \label{fig:indoor-error} \end{figure} We also carry out an indoor experiment inside a $4\times9\times\SI{3.5}{\m}$ room with a wood floor, doors, and glass windows (\Cref{fig:indoor-error}). The estimated distances have an average error of \SI{20}{\cm}, slightly higher than in the outdoor experiment due to multipath propagation degrading the estimation. While the indoor accuracy depends on the specific environment, the results with subsample interpolation generally outperform those without this technique. The subsample estimation is also more stable due to its fine-grained resolution. \subsubsection{Robustness} We evaluate our system's robustness by conducting an experiment under different Signal-to-Noise Ratio (SNR) conditions. There is no mutual interference in this pair ranging scenario. We compare the system in three distance settings: $\SI{4}{m},\SI{8}{m},\SI{12}{m}$. For each run, we use a fixed receive gain while varying the transmit gain of both initiator and reflector to change the SNR level of the received signal at each node. In \Cref{fig:snr}, the average distance error (over 10 runs) fluctuates with a deviation of \SI{\pm10}{\cm} over different SNRs.
This experiment indicates that our ranging protocol still achieves acceptable accuracy in the low-SNR regime. \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{figures/eval/snr.eps} \caption{Accuracy under different SNR conditions for various distances in outdoor environment.} \label{fig:snr} \end{figure} \subsubsection{Spectrum Flexibility} We evaluate the distance accuracy obtained in the upsampling mode, in which our ranging signals are shrunk to a narrower band. The original signals are upsampled with different interpolation factors and low-pass filtered at a sample rate $f_S=\SI{100}{MHz}$ to generate signals of different bandwidths. For each bandwidth setting, we run the experiment to collect the estimations for a set of ground truth distances. \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{figures/eval/upsample-outdoor-error.eps} \caption{Outdoor pair ranging experiment with various signal bandwidths of \SI{100}{\MHz}, \SI{50}{\MHz}, \SI{25}{\MHz} using an upsampling factor of $1$, $2$, $4$ respectively. Sample rate is fixed at $\SI{100}{MHz}$.} \label{fig:upsample-outdoor-error} \end{figure} \Cref{fig:upsample-outdoor-error} shows the results of the upsampling experiment, where each data point is an average over 10 runs. The accuracy is comparable in all three cases, with an average error of \SI{20}{\cm} over the evaluated distances. This experiment shows that the distance accuracy of our system does not depend on the ranging signal bandwidth. In fact, we can narrow the ranging spectrum by a factor of four without hurting performance. \subsubsection{Effect of Sample Rate} As opposed to the signal bandwidth, the sample rate is the key factor impacting the ranging accuracy. To verify this, we conduct a separate experiment analyzing the impact of the sample rate. In this experiment, we use an interpolation factor of 1, i.e., the signal bandwidth is always equal to the sample rate. \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{figures/eval/rate-outdoor-error.eps} \caption{Accuracy impacted by different sample rates.} \label{fig:rate-outdoor-error} \end{figure} \Cref{fig:rate-outdoor-error} shows the results of this experiment, confirming the improvement in accuracy from \SI{50}{\cm} to \SI{30}{\cm} and \SI{15}{\cm} when the sample rate is increased from \SI{25}{MHz} to \SI{50}{MHz} and \SI{100}{MHz}, respectively. \subsection{Setup and Methodology} Throughout the evaluation, we fix the epoch duration to $\Delta\epoch=\SI{1}{s}$. Unless otherwise stated, we enable Successive Interference Cancellation (SIC) and the response batch mode with batch size $|\batch|=10$ by default. In both the testbed experiments and the simulations, the sample rate is set to $f_S=\SI{100}{MHz}$ (unless otherwise noted), while the actual signal bandwidth $B$ can vary from $\SI{100}{MHz}$ down to $\SI{25}{MHz}$. The center frequency is fixed at $\SI{2.45}{GHz}$. We use a sequence length of $L=512$ for all scenarios. The sequence detection uses the ratio threshold $\alpha=50$ and $L_0=256$. For the real testbed experiments, the setup consists of two ranging nodes. Each node is composed of an SDR device and a host machine to control it. The SDR device is a USRP X310 equipped with a UBX-160 daughterboard~\cite{wiki:usrpX310}. The device is mounted with two omnidirectional \SI{2.4}{\GHz} antennas on the same daughterboard, one for receiving and one for transmitting signals.
The initiator is hosted on an HP-Z620 workstation with two Intel Xeon E5-2670 2.6\,GHz 16-core CPUs and 64\,GB RAM, connected to the USRP via a $\SI{10}{Gbps}$ SFP+ cable that can sustain a data stream at a sample rate of $\SI{100}{MHz}$. The reflector runs directly on the USRP device with a custom-modified FPGA for ranging purposes (see \Cref{appendix:fpga}). For simulation, we set up a wideband wireless environment using Matlab's Wideband LOS Channel. This channel models the propagation of RF signals between multiple points with a line-of-sight path and includes free-space attenuation as well as the time delay effect. The model is configured with an operating frequency of \SI{2.45}{\GHz} and a sample rate of \SI{100}{\MHz}. Ranging request and response signals are generated as sequences of $L$ configurable random bits passed to a BPSK modulator. The modulated signals are then fed to the channel simulator to undergo the effects of distance, such as path loss and time delay. \subsection{Broadcast Ranging} In broadcast ranging, the key challenge is the mutual interference that can degrade the accuracy of the distance estimation. The SINR of a response coming from an individual reflector is reduced when more nodes participate in the ranging session. The broadcast ranging evaluation in this section is carried out by simulation. \subsubsection{Equidistant Scenario} We first evaluate broadcast ranging in equidistant scenarios, where reflectors are located randomly on a circle at the same distance to the initiator. We focus on the effect of distance and of the number of reflectors on the system performance. \BfPara{Failure Rate} As collisions in broadcast ranging can cause nodes to disconnect, we evaluate the system in terms of failure rate, the ratio between the number of reflectors that fail the ranging operation (i.e., no distance is estimated by the initiator due to missed or corrupted messages) and the total number of reflectors in the system. Note that a reflector succeeds if at least one of its responses reaches the initiator. We recall that the random waiting period $T_W$ between responses is randomly selected within a waiting window $W$. Setting $W$ needs to take into account the trade-off between the collision rate and the ranging session's completion time. A large value of $W$ reduces broadcast collisions at the expense of ranging time. In the following failure rate evaluation, we first choose $W$ such that the ranging session duration $T_{\mathrm{session}}$ is around $\SI{1}{ms}$. This implies ranging utilizes only $0.1\%$ of the epoch duration $\Delta\epoch=\SI{1}{s}$. We place the reflectors randomly on a circle at the same distance to the initiator. By changing the number of reflectors in the ranging session, we collect the failures for each run. \Cref{fig:sim-equidistant-spread1ms-failure-rate} shows that with $T_{\mathrm{session}}=\SI{1}{ms}$, there are almost no collisions for broadcast ranging with up to 100 nodes. The failure rate increases slightly to $0.007$ when the number of reflectors reaches 150. \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{figures/eval/sim_equidistant_spread-1ms_failure_rate.eps} \caption{Failure rate of broadcast ranging with session duration of \SI{1}{\ms} in equidistant scenario simulation.} \label{fig:sim-equidistant-spread1ms-failure-rate} \end{figure} Now we are interested in the failure rate in a more challenging situation, where the session time is limited to $\SI{100}{us}$.
With this constraint, the results in \Cref{fig:sim-equidistant-failure-rate} show that the system can sustain up to $20$ reflectors; beyond that point, more reflectors start to fail. However, if the system has 20 or fewer reflectors, it is advantageous to reduce the epoch duration to $\SI{100}{us}$, in which case the system can support up to 10,000 ranging sessions per second. From \Cref{fig:sim-equidistant-failure-rate}, we also see the benefit of SIC, which slightly improves the failure rate. \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{figures/eval/sim_equidistant_failure_rate.eps} \caption{Failure rate of broadcast ranging with constrained session duration of $\SI{100}{us}$ in equidistant scenario simulation.} \label{fig:sim-equidistant-failure-rate} \end{figure} \BfPara{Impact of Number of Nodes} We now evaluate how the number of ranging nodes in a session affects the average accuracy. Specifically, for each run, we select a fixed number of ranging nodes, then increase the circle radius from $\SI{1}{m}$ to $\SI{25}{m}$ with reflectors randomly distributed on the circle. To fully understand the impact of collisions, we constrain all responses to $\SI{100}{us}$ (10 responses of $\SI{5}{us}$ per reflector using $\SI{100}{MHz}$ bandwidth, providing 10,000 ranging epochs per second). The average distance error is collected over all distance settings in this run. The result of one run is represented by a data point in~\Cref{fig:sim-equidistant-error-nnodes}. Recall that Successive Interference Cancellation (SIC) is used to remove mutual interference from the combined ranging signal received at the initiator. To see the effectiveness of SIC, we rerun the system on the same data with SIC disabled. \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{figures/eval/sim_equidistant_error_nnode.eps} \caption{Impact of number of reflectors on broadcast ranging with constrained session duration of $\SI{100}{us}$ in equidistant simulation.} \label{fig:sim-equidistant-error-nnodes} \end{figure} \Cref{fig:sim-equidistant-error-nnodes} shows that the overall error is well below \SI{40}{\cm} and that SIC improves the accuracy by \SI{5}{\cm} in most scenarios. Focusing on the accuracy, we see that it degrades due to the overlapping of responses from multiple reflectors. Although our random sequences have good correlation properties, massive collisions can severely degrade the accuracy or even completely destroy the signals, as indicated by the disappearance of the tail of the red curve (without SIC). An interesting feature in~\Cref{fig:sim-equidistant-error-nnodes} is the dip in both curves when the number of reflectors increases beyond 50. When collisions grow beyond a threshold, many reflectors become disconnected while the remaining ones yield lower errors. Therefore, the average error, computed only over the responses that actually arrived, is lower but does not indicate improved performance. \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{figures/eval/sim_equidistant_error_distance.eps} \caption{Impact of distance on broadcast ranging with constrained session duration of $\SI{100}{us}$ in equidistant scenario simulation.} \label{fig:sim-equidistant-error-distance} \end{figure} \BfPara{Impact of Distance} We evaluate how distance affects broadcast ranging. For each run, we select a new distance and vary the number of ranging nodes. The average error is computed over all the sessions.
\Cref{fig:sim-equidistant-error-distance} shows how the error changes when increasing the distance from $\SI{1}{m}$ to $\SI{25}{m}$. In this experiment, we also observe a boost of \SI{5}{\cm} from applying the SIC technique. The average error fluctuates around \SI{15}{\cm} but is independent of the ranging distance. This result is aligned with the pair ranging case already seen in~\Cref{fig:outdoor-error}. \subsubsection{Random Scenario} In a generic broadcast ranging scenario, the accuracy depends mainly on the number of reflectors and their positions. Far-away reflectors typically give less accurate estimations due to their weaker response signals, which additionally experience strong interference from reflectors that are closer to the initiator. To evaluate the system in such scenarios, we have run 100 scenarios with different numbers of reflectors randomly located at different distances within a range of \SI{30}{\m} of the initiator. We constrain the session duration to roughly $\SI{100}{us}$ so that we can observe more collisions and evaluate the effectiveness of the SIC technique. \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{figures/eval/sim-random.eps} \caption{Scenario simulation of random positions for $9$ reflectors.} \label{fig:sim-random} \end{figure} \Cref{fig:sim-random} shows an example of a random scenario with $9$ reflectors. The left side shows the distance of each reflector to the initiator (located at the origin). The right side shows how responses arrive and collide at the initiator. \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{figures/eval/sim_random_error_nnode.eps} \caption{Impact of number of reflectors on broadcast ranging with session duration of $\SI{100}{us}$ in random scenario simulation.} \label{fig:sim-random-error-nnode} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{figures/eval/sim_random_failure_rate.eps} \caption{Failure rate of broadcast ranging with session duration of $\SI{100}{us}$ in random scenario simulation.} \label{fig:sim-random-failure-rate} \end{figure} The results for random scenarios are shown in \Cref{fig:sim-random-error-nnode,fig:sim-random-failure-rate}. The average error is stable around \SI{23}{\cm} when the number of reflectors exceeds $20$. From~\Cref{fig:sim-random-error-nnode}, the performance looks the same whether SIC is enabled or disabled. However, a closer look at~\Cref{fig:sim-random-failure-rate} reveals that without SIC there are more failures and only a small portion of reflectors succeeds, whose average estimation error is at an acceptable level. In fact, \Cref{fig:sim-random-failure-rate} shows a high failure rate of $0.57$ at $20$ reflectors when SIC is not applied, while the failure rate stays below $0.1$ if SIC is used. Compared with the equidistant scenarios (\Cref{fig:sim-equidistant-failure-rate}), SIC clearly performs better under a pronounced near-far effect, as the random scenarios contain both more close-by and more far-away reflectors. \section{System and Performance Evaluation} \label{sec:evaluation} \input{eval-setup} \input{eval-experiments} \input{eval-simulation} \subsection{Detailed Protocol Description} \label{sec:protocol-details} In this section, we present a detailed protocol description in a generic setting, where one initiator starts ranging with \emph{multiple} reflectors and learns the distance to each of them \emph{at the same time}. Broadcast ranging is a distinct feature of our protocol in comparison with existing work.
To enable broadcast ranging, we allow the reflectors to send back responses that can overlap both in time and in spectrum. In our protocol, each ranging session is required to start and end in the same time epoch, and each epoch serves at most one session. Within an epoch, the request is sent at the beginning, while responses are sent back at random moments. For compactness of presentation, we assume nodes are already loosely synchronized, i.e., they observe the same time epoch $\epoch$. \subsubsection{Random Sequences} \label{sec:sequences} Random sequences are a crucial part of our ranging protocol, as they allow ranging nodes to detect requests and responses, as well as to derive timing estimations. We focus on the correlation property of secret random sequences, as this property determines the detection capability of the system. As generated in~\Cref{eq:req,eq:resp}, a secret random sequence is obtained using a PRF (e.g., HMAC-SHA2/3), a shared key, the current time epoch $\epoch$, and the sender's identifier. For the responses, we also embed an additional field for the counter $n$ to distinguish the messages. The generated sequences are modulated using BPSK to create the transmitted signals. With this construction, every signal is unique and detectable by the ranging nodes, but remains indistinguishable to the adversary. \BfPara{Good Correlation Property} In terms of communication robustness, a sequence with a good correlation property can be easily detected even in the presence of interference and noise. While there are well-known sequences with this property (e.g., m-sequences), they are public and therefore not indistinguishable to the adversary, which is a security concern. This concern is our main motivation for using cryptographic operations to generate the sequences. Due to their randomness, both ranging request and response signals span the whole operating spectrum. With multiple reflectors responding at the same time, these responses potentially create signal collisions at the initiator. The detection of individual responses from the combined received signal relies on the correlation property of the responses. We carry out an evaluation where random sequences are transmitted in two scenarios: non-overlapping and overlapping. The example in~\Cref{fig:overlap-sequences} shows that our cryptographically randomized sequences possess a good correlation property that enables the initiator to easily locate individual responses under external interference and noise. \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{figures/overlap_sequences2.pdf} \caption{Example of 3 different sequences transmitted sequentially (non-overlapped), and simultaneously (overlapped). The non-overlapped region shows sharp and high peaks at the detected position of each sequence. The overlapped region shows lower peaks, but they are still strong and clearly detectable from interference.} \label{fig:overlap-sequences} \end{figure} \subsubsection{Boosting Interference Resilience} As seen from~\Cref{fig:overlap-sequences}, while our random sequences are largely mutually uncorrelated, the detection accuracy decreases when ranging is performed with more reflectors, because individual SINRs are reduced by mutual interference. The impact is more visible for far-away reflectors, whose signals already arrive weak at the initiator. To mitigate the impact of mutual interference, we use a Successive Interference Cancellation (SIC) technique to increase the SINR of individual responses. Our technique consists of the following steps.
\begin{enumerate}[wide] \item \emph{Finding the strongest reflector:} We compute the peak correlation $C^k_M$ for each reflector $\reflector_k$ and find the strongest reflector $\reflector_{\hat{k}}$ with the highest correlation value: $\hat{k}=\argmax_k |C_M^k|$. We estimate the sub-meter-accuracy distance for $\reflector_{\hat{k}}$ by performing the timing correction described in~\Cref{sec:tof-estimation}. \item \emph{Estimating the channel attenuation:} The channel attenuation $\gamma$ (a complex value) for reflector $\reflector_{\hat{k}}$ is estimated as $$\gamma=\frac{\sum_{n=0}^{L-1} r_{n+M} p_n^*}{\sum_{n=0}^{L-1}|p_n|^2}$$ where $\{p_n\}$ is the response sequence of the strongest reflector $\reflector_{\hat{k}}$, and $\{r_{n+M}\}$ is the received signal aligned to $\{p_n\}$ at the peak position $M$ found in the previous step. \item \emph{Removing the strongest response:} With the channel attenuation $\gamma$, the strongest response signal arriving at the initiator is estimated to be $\{\gamma p_n\}$. We remove it from the combined received signal and repeat iteratively on the residual signal to extract the remaining reflectors. Our evaluation (\Cref{sec:evaluation}) shows the resulting accuracy improvements for broadcast ranging. \end{enumerate} \subsubsection{Response Batch Mode} In the following paragraphs, we discuss the response batch mode, which supports multiple responses per reflector in the responding step. The main idea is to use random delays between responses to obfuscate the actual ToF. The response batch not only increases the ranging resiliency and security, but also improves the ranging accuracy. \BfPara{Response Delay} We consider the responding step performed by a reflector $\reflector_k$ when it receives the initiator's request. Based on the cross-correlation computed during the request detection, the reflector starts its own timer at the peak position where the request is detected. The $n$-th response is transmitted when the timer reaches the waiting period $T^k_{W,n}$, given by \begin{equation} \label{eq:waiting} T^k_{W,n} := T\cdot(H_\key( \reflector_k,\epoch,n,W) \bmod{W}) \end{equation} where $W$ denotes the waiting window. The timer is reset when the sending of a response is complete, and then the next response is scheduled. By~\Cref{eq:waiting}, the exact transmit time is determined uniquely for each reflector and each response, but it is non-deterministic to the adversary. Recalling the ToF estimation in~\Cref{eq:timing2}, the random waiting period obfuscates the ToF. This prevents the adversary from learning the actual distance even at a coarse accuracy. \BfPara{Improved Accuracy with Batch Estimation} With the response batch $\batch$ from a reflector $\reflector_k$, the initiator can perform the distance computation and obtain multiple estimations. Note that the actual response batch detected by the initiator may contain fewer responses due to missed or corrupted messages. If $\tilde{\batch}$ denotes the received batch, we have $|\tilde{\batch}|\le|\batch|$. To derive the final distance result for $\reflector_k$, the initiator first finds the subset $\bestss$ of those responses whose timing estimations are closest to the median estimation of $\tilde{\batch}$, then computes the final result as the mean over the subset $\bestss$ (see the sketch below). The size of $\bestss$ is selected as $|\bestss| =\min(|\batch|/2, |\tilde{\batch}|)$. This estimation method filters out outliers.
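As an illustration of this estimator, the following C++ sketch selects the $\min(|\batch|/2,|\tilde{\batch}|)$ estimates closest to the median and averages them. All names and values are illustrative; the sketch assumes at least one response was received.

\begin{verbatim}
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Batch estimator sketch: keep the k received ToF estimates closest
// to the median, k = min(|B|/2, |B~|), and return their mean.
double batchEstimate(std::vector<double> est, std::size_t sentBatch) {
    std::vector<double> s = est;
    std::sort(s.begin(), s.end());
    double median = s[s.size() / 2];
    std::size_t k = std::min(sentBatch / 2, est.size());
    // Order by distance to the median, then average the k closest.
    std::sort(est.begin(), est.end(), [median](double a, double b) {
        return std::fabs(a - median) < std::fabs(b - median);
    });
    double sum = 0.0;
    for (std::size_t i = 0; i < k; ++i) sum += est[i];
    return sum / static_cast<double>(k);
}

int main() {
    // Nine received responses out of a batch of ten (ns, illustrative);
    // the two outliers are filtered out by the median selection.
    std::vector<double> tof = {33.4, 33.6, 33.5, 41.2, 33.7,
                               33.5, 27.9, 33.6, 33.4};
    printf("final ToF estimate: %.2f ns\n", batchEstimate(tof, 10));
    return 0;
}
\end{verbatim}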
\subsubsection{Spectrum Flexibility} Digital ranging systems in general need a large bandwidth to achieve good accuracy. According to the Cram\'er-Rao bound, the accuracy of timing estimation is a function of the spectrum bandwidth used for ranging: one achieves better accuracy with larger bandwidth. This bound implies limited accuracy for narrowband wireless ranging. In practice, however, the accuracy is far from the theoretical bound due to imperfections such as channel variations, obstructions, and measurement errors~\cite{dardari09,zanella16rssranging}. For our proposed ranging protocol, we find that accuracy is instead constrained by the clock resolution, which in turn depends on the sample rate. This implies that as long as a specific sample rate yields an acceptable accuracy for a ranging application, we can shrink the bandwidth while keeping the sample rate unchanged without decreasing the accuracy. This is the motivation for the \emph{upsampling mode} in our ranging protocol (\Cref{fig:system-diagram}). To flexibly narrow the bandwidth, both request and response signals are upsampled by an interpolation factor equal to $f_S / B$, where $f_S$ is the sample rate and $B$ is the desired signal bandwidth. The upsampled signals are sent to the RF front end for transmission. In contrast to the transmitting chain, the receiving chain of both initiator and reflectors does not include a downsampler. Instead, the expected sequence is also upsampled and directly used by the detector, which performs the pattern search at the high sample rate $f_S$. This design allows us to achieve good accuracy with a much narrower bandwidth and distinguishes our ranging system from previous work. \begin{figure} \centering \includegraphics[width=0.47\textwidth]{figures/design-diagram-new.pdf} \caption{Ranging with upsampling mode for spectrum flexibility.} \label{fig:system-diagram} \end{figure} \section{Introduction} The mobile revolution fundamentally changed how we access and share information. Wireless localization has greatly contributed to this revolution by enabling access to geographically relevant information (e.g., Location-Based Services). It is poised to have an even greater and more critical impact on the future of navigation systems, from self-driving cars, to air traffic coordination, to unmanned aerial systems (e.g., delivery drones), as well as on a growing number of IoT and Augmented Reality applications. Secure localization has received increased interest from the research community in recent years, as practical attacks were demonstrated against a variety of systems~\cite{rasmussen07secnav,TippenhauerPRC2011,RanganathanC2017,OlafsdottirRC2017}. For instance, spoofing GPS signals is easily achievable with open-source software~\cite{gps-sdr-sim} and low-cost SDR platforms. This led to demonstrations of drone hijacking~\cite{NohKSSKCK2019}, and real-world incidents have been detected in multiple places~\cite{gps-spoofing-new-scientist,gps-spoofing-maritime}. Augmenting GPS with an Inertial Navigation System has proven to be insecure too~\cite{NarainRN2019}. Localization systems for air traffic also proved to be insecure, with vulnerabilities including landing an airplane off the runway by attacking the ILS~\cite{SathayeSRN2019}, or inserting a ghost airplane in the sky by exploiting weaknesses in ADS-B~\cite{CostinA2012,StrohmeierSPLM2017}. Ranging is an important building block of localization systems, as it enables two devices to estimate their relative distance.
Various techniques have been proposed and deployed over the years, including energy-based, phase-based, and time-based ranging. The accuracy of most of these techniques and systems is expected to be no better than about 1 meter~\cite{zafari2017survey}. However, existing techniques, including the recent IEEE 802.15.4 addition of the Scrambled Timestamp Sequence (STS), remain vulnerable to various attacks. In essence, cryptographic techniques alone are insufficient to defend against spoofed and relayed signals without a common trusted time reference, and interactive protocols are subject to jamming and sophisticated replaying of signals by adversaries, if such signals can be detected. In the security research community, one class of protocols focused on distance bounding, where a party proves that it is within a given distance of a verifier. Distance-bounding protocols are important for various access control applications. Proposed techniques aim at mitigating distance-decreasing attacks such as early-detect/late-commit (ED/LC)~\cite{rasmussen08db,poturalski11db,manuel10dd} and denial of service (DoS)~\cite{poturalski10cicada}. Other protocols were proposed to protect the time of arrival (ToA) and enhance the integrity of the ranging information~\cite{rasmussen07secnav,tippenhauer09temp}, and more recently a security primitive was proposed for securing the ToA derivation of messages~\cite{leu20mtac}. In this work, we propose a set of integrated techniques and protocols to secure broadcast ranging. The proposed techniques are designed to operate over flexible spectrum bandwidths and can be implemented on an FPGA and/or a host device without performance loss. For this, we exploit both the sub-sampling and the scheduled burst-mode capabilities of modern RF SDR peripherals. The techniques and protocols provide both anti-jamming protection and support for multi-device (broadcast) ranging, thanks to stealthy, cryptographically randomized messages in time and code, as well as successive interference cancellation. We extensively evaluate the performance of the proposed techniques analytically, in simulation for scalability (to tens of simultaneous reflectors), and experimentally over the air on SDR platforms. In particular, we explore the trade-offs in terms of spectrum usage, interference resilience, accuracy, and scalability. Our techniques are implemented on the USRP X310~\cite{wiki:usrpX310}, a popular Software Defined Radio (SDR) platform, both on the FPGA and on the host using GNU Radio~\cite{url:gnu-radio} and our own extension of UHD~\cite{wiki:uhd} for fine control of the capabilities added to the X310 FPGA. We demonstrate the flexibility to operate over a spectrum of 25-100MHz, achieving below 20 cm accuracy with bursts of 5 us. Our contributions are summarized as follows: \begin{itemize} \item We develop, to the best of our knowledge, the first SDR ranging system with a secure broadcast mechanism at high accuracy over a flexible, narrow bandwidth. \item Stealth and security are achieved through cryptographically randomized messages in time and code. \item Flexibility and scalability are achieved through upsampling and successive interference cancellation. \item The protocol is designed and implemented on FPGA, host (with minimal FPGA modifications), and hybrid.
\item We demonstrate through extensive performance evaluations (over-the-air, and simulations for scalability) that we can achieve an accuracy below 20cm over a wide range of SNRs (as low as 0dB) and spectrum (25MHz-100MHz), with 20 simultaneous reflectors (with sessions of $\SI{100}{us}$), and with over 100 reflectors with sessions of $\SI{1}{ms}$. \end{itemize} \section{Secure Broadcast Ranging Protocol} \label{sec:protocol} In this section, we provide a detailed description of our ranging protocol with a focus on ranging accuracy. As a time-based ranging protocol, the distance computation in our protocol relies on precisely recording the request and response times. While sharing the basic mechanism with existing time-based ranging systems (such as IR-UWB), where the Time of Flight is measured to compute the distance, our underlying physical-layer processing techniques are different and achieve comparable accuracy with a \emph{significantly smaller bandwidth} (\SI{100}{MHz} and below). We dedicate \Cref{sec:security} to the security analysis of the protocol. We first consider a simple ranging session consisting of a single request and a single response. For simplicity of presentation, we assume both initiator and reflector know the sequences used for the request and the response. We present the timing challenges and the techniques to estimate the distance with sub-meter accuracy. We then describe the synchronization procedure used to establish the requirements for the upcoming ranging session. Finally, we present a full ranging session consisting of multiple responses, which offers both higher accuracy and better security. The section ends with a discussion of bandwidth-efficiency improvements. \input{distance_computation} \input{synchronization} \input{full_protocol} \section{Security Analysis} \label{sec:security} In this section, we analyze the security of our proposed protocol with a focus on the attacks introduced in~\Cref{sec:background-attacks} and the threat model described in~\Cref{sec:threat-model}. \subsection{Denial of Service Attack} Denial of Service (DoS) in ranging systems is generally carried out in the form of a jamming attack, with the adversary emitting interfering signals to corrupt the ranging messages. In practice, a jammer with powerful capabilities (such as high-power, continuous jamming) would be easily detected. Therefore, we consider a bounded adversary with jamming power similar to that of the ranging nodes. We first consider blind intermittent jamming (e.g., the cicada attack). Through the use of random sequences, the SINR of the request and response signals is increased by roughly a factor of $L$, with $L$ being the sequence length. For instance, with $L=1024$, the SINR is improved by approximately $\SI{30}{dB}$. The high resilience against interference is verified in our evaluation (\Cref{sec:evaluation}), which implies robustness against jamming attacks. For $\sync$ messages, due to their infrequent use, the hit probability remains low in comparison with other transmissions in the system. For systems having ranging as an additional feature, this attack would be more efficient if it focused on non-ranging messages. Consider a selective jamming attack, where the adversary targets the ranging messages. To perform this attack, message detection is required. However, $\req$ and $\resp$ are indistinguishable to the adversary, and $\sync$ can only be detected at the end of the message (due to the postamble), too late for jamming.
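To illustrate the despreading gain claimed above, the following Monte-Carlo sketch correlates a length-$L$ random $\pm 1$ sequence against a jammed copy of itself and compares the pre- and post-correlation SNR. It is a single-trial illustration (the estimate fluctuates around the theoretical $10\log_{10} L \approx \SI{30.1}{dB}$ for $L=1024$), not part of our implementation.

\begin{verbatim}
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int L = 1024;
    const double noisePower = 10.0; // jammer 10x stronger than signal
    std::mt19937 rng(7);
    std::bernoulli_distribution bit(0.5);
    std::normal_distribution<double> noise(0.0, std::sqrt(noisePower));

    std::vector<double> p(L);
    for (int n = 0; n < L; ++n) p[n] = bit(rng) ? 1.0 : -1.0;

    // Despread: correlate the jammed signal p + w against p.
    double corr = 0.0;
    for (int n = 0; n < L; ++n) corr += (p[n] + noise(rng)) * p[n];

    // Signal term is L; the noise term has variance L * noisePower.
    double preSnr  = 1.0 / noisePower;
    double postSnr = (corr * corr) / (L * noisePower);
    printf("gain ~ %.1f dB (theory %.1f dB)\n",
           10.0 * std::log10(postSnr / preSnr),
           10.0 * std::log10((double)L));
    return 0;
}
\end{verbatim}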
\subsection{Distance Enlargement Attack} We consider the distance enlargement attack carried out by a man-in-the-middle adversary. In this attack, the adversary aims to delay the $\req$ and $\resp$ messages (the $\sync$ message is not used for distance computation). Since the adversary cannot generate a legitimate random sequence, delaying these messages must be done by recording a transmission and replaying it at a later time. However, simply replaying a message would be caught by a duplicate check at the ranging nodes. A successful distance enlargement attack, therefore, requires both blocking and replaying the ranging messages at the same time. In the following, we investigate the feasibility of this attack. First, we emphasize that blocking and recording a signal are mutually exclusive operations. In addition, as the adversary cannot predict the random sequence, blocking a signal is performed in the form of overshadowing (jamming) rather than annihilation. We consider a scenario where the adversary records a portion of the signal from the beginning, then blocks the rest. Concretely, let $\{p_1\ldots{}p_L\}$ be a transmitted response sequence. Assume the adversary records the portion $\mathcal{X}=\{p_1\ldots{}p_N\}, N\le{}L$, and blocks the rest with a jamming signal $\{q_1\ldots{}q_{L-N}\}$. Note that $\mathcal{X}$ can be used as (a part of) the jamming signal. The (partially) blocked signal arriving at the initiator is $\mathcal{Y}=\{p_1\ldots{}p_N,p_{N+1}+q_1\ldots{}p_L+q_{L-N}\}$. We say that replaying $\mathcal{X}$ succeeds if $\mathcal{X}$ is detected while $\mathcal{Y}$ is not detected by the initiator. We note that the earliest time for replaying $\mathcal{X}$ is right after the recording, i.e., the first $N$ symbols of the jamming signal can be the recorded portion: $\{q_1\ldots{}q_N\}=\mathcal{X}$. Now if $ToF_{\max}$ is the maximum ToF allowed in the ranging system, $\mathcal{X}$ cannot be replayed later than $N_{\max}=ToF_{\max}/T$ samples, or the initiator will detect the attack. This requirement limits the size of $\mathcal{X}$ to $N_{\max}$, that is, $|\mathcal{X}|\le{}N_{\max}$. Recalling the cross-correlation computation in~\Cref{eq:xcorr-norm}, the peak correlation value of $\mathcal{X}$ is at most $|\mathcal{X}|/L\le{}N_{\max}/L$ of the peak of a legitimate sequence. In a real-world ranging system, if the allowed distance limit is $\SI{300}{m}$, the sample rate is $\SI{100}{MHz}$ ($T=\SI{10}{ns}$), and the sequence length is $L=1024$, we have $N_{\max}=100$ and $N_{\max}/L<1/10$. This ratio is too low for $\mathcal{X}$ to be detected. Now consider the condition for $\mathcal{Y}$ not to be detected. Recalling the good correlation property of our random sequences, the jamming portion is uncorrelated with the overshadowed portion of $\mathcal{Y}$. Therefore, as long as the jamming power is of the same order as the ranging signals, the initiator would detect $\mathcal{Y}$ with high probability (spreading gives $\approx\SI{30}{dB}$ gain). In conclusion, the probability of a successful replay for distance enlargement is very low. \subsection{Distance Reduction Attack} To successfully carry out a distance reduction attack, the adversary must be able to guess the signal and relay it earlier than the legitimate one. However, since ranging responses in our protocol are indistinguishable, guessing the signal with probability higher than a random guess is impossible. Therefore, as with distance enlargement, distance reduction requires the adversary to record the signal.
This requirement implies that the signal (or any part of it) cannot be replayed earlier than its recording time; equivalently, our protocol is robust against this attack. \subsection{Sniffing Attack} A passive attacker aiming to learn the distance to and between ranging nodes needs to detect the signal and infer the distance without knowledge of the secrets used by the ranging nodes. We first note that, with the use of encryption, the synchronization data in the $\sync$ message cannot be extracted. For the ranging request and response, since these signals are indistinguishable due to the use of a secure PRF, the only information available to the adversary is the energy of the signals, which can be obtained by tracking the communication channel. We consider an adversary with a high-precision energy detector (e.g., using a high-quality low-noise amplifier and a much higher sample rate than that used by the ranging nodes). By tracking the energy of the request and response arriving at itself, the adversary can obtain the ranging session's start time $\hat{t}_S$ and end time $\hat{t}_R$ in its own time reference. \begin{align} \hat{t}_S & = t_S+ToF_{\initiator\rightarrow\adversary} \\ \hat{t}_R & =t_S+ToF+T_W+T_{\resp}+ToF_{\reflector\rightarrow\adversary}. \end{align} The ToF between the initiator and reflector can be written as \begin{equation} \label{eq:tof-adversary} ToF = \hat{t}_R - \hat{t}_S - T_W - T_{\resp} - ToF_{\reflector\rightarrow\adversary} + ToF_{\initiator\rightarrow\adversary} \end{equation} It is seen from \Cref{eq:tof-adversary} that in a generic scenario, when the adversary is not aware of its distance to the initiator and reflector (i.e., $ToF_{\initiator\rightarrow\adversary}$ and $ToF_{\reflector\rightarrow\adversary}$ are unknown), the ToF cannot be reliably estimated by the adversary even if the response delay $T_W$ is known. Now consider special cases where $ToF_{\initiator\rightarrow\adversary}$ and $ToF_{\reflector\rightarrow\adversary}$ cancel out, for instance when the adversary is located in the initiator's proximity ($ToF_{\initiator\rightarrow\adversary}=0$ and $ToF_{\reflector\rightarrow\adversary}=ToF$) or exactly halfway between initiator and reflector ($ToF_{\initiator\rightarrow\adversary}=ToF_{\reflector\rightarrow\adversary}$). Even then, the random waiting period $T_W$ hinders the adversary from estimating the distance with acceptable accuracy. It is worth noting that in the above equations we have already ignored the unknown timing error $T_E$, which further increases the adversary's challenge. \subsection{Synchronization} \label{sec:sync} The distance computation presented in~\Cref{sec:distance-computation} assumes that the request and response sequences are known in advance to both participating nodes. In our protocol, this condition is realized by synchronization. \BfPara{Loose Synchronization} The initiator performs the synchronization step at the beginning of its current time epoch $\epoch$. When receiving the $\sync$ message, given by \Cref{eq:sync}, the reflector updates its current epoch to $\tau$. If the processing delays at the initiator and reflector are $\epsilon_{\initiator}$ and $\epsilon_{\reflector}$, the mismatch between the initiator's and reflector's clocks after the epoch update is $\Delta=\epsilon_{\initiator}+\epsilon_{\reflector}+ToF+T_E$. Due to $\Delta$, the initiator and reflector are not perfectly aligned.
Nevertheless, if the epoch period is large enough ($\Delta\epoch\gg\Delta$), synchronization allows both sides to be loosely synchronized with respect to the time epoch and to agree on the request and response sequences used for the upcoming ranging session. In fact, the mismatch $\Delta$ is on the order of a few microseconds, while the epoch period $\Delta\epoch=\SI{1}{second}$ in our system. We emphasize that the ranging accuracy is not impacted by $\Delta$, as the reflector's clock is not used for the distance computation. \BfPara{Infrequent Synchronization} As long as nodes are loosely synchronized, this state remains valid over many subsequent time epochs. Synchronization messages are therefore not necessary for each ranging session. While there are various methods to optimize how often synchronization should be performed, our simplified approach is to schedule periodic synchronizations such that the clocks are never mismatched by more than half of the time epoch duration. Specifically, if $\delta$ is the clock drift (time difference per second) between nodes, resynchronization is required after $T_{\var{resync}}=\frac{1}{2}\Delta\epoch/\delta$ seconds. With a typical value of $\delta=\pm\SI{40}{ppm}$ and $\Delta\epoch=\SI{1}{s}$, this gives $T_{\var{resync}}=0.5/(40\times10^{-6})=\SI{12500}{s}$, i.e., we only need to synchronize roughly every $3.5$ hours. It is possible that a node sometimes misses the synchronization message (e.g., due to environmental conditions). When this situation is detected, for instance by observing an unsynchronized state for significantly more than half an epoch, resynchronization can be triggered. \BfPara{Postamble} The $\sync$ message has a special structure, in which a postamble is appended to the end of the message. The postamble is a sequence with a good correlation property allowing easy signal detection. In contrast to a preamble, the postamble prevents the payload of the $\sync$ message from being attacked, because by the time the adversary detects the $\sync$, the data has already been received. The tradeoff is increased processing overhead at the receiver, which must store the whole message before decoding. As $\sync$ is a short frame, we consider this tradeoff worth the improved resiliency.
\section{Background} We now present the preliminaries for understanding Vitis \cite{XilinxVitis:2020} and memory on FPGAs. With the arrival of the next golden decade of computer architecture, FPGAs are being applied ever more widely, in AI and HPC\cite{zhang2017improving,turkington2006fpga,wei2017automated,zhang2017frequency,chen2016accelerating}, as well as in database\cite{arcas2014empirical,owaida2017centaur,mueller2009fpga} and graph-related\cite{shao2019improving,Khoram:2018,Zhang:2018} algorithms; most of these workloads have complex implementations and need to read and write large amounts of data. Hence, high-level programming approaches on FPGAs, such as High-Level Synthesis (HLS) and OpenCL, are the primary implementation methods for simplifying and porting AI and other algorithms\cite{zhang2017improving,turkington2006fpga,gautier2016spector,cong2011high,zhou2018rosetta,hara2008chstone,muslim2017efficient,kim2017heterogeneous,de2018designing, neuendorffer2013building,daoud2014survey,alias2013optimizing}. At the same time, as such algorithms are deployed, standard DDR-based FPGAs have gradually run into memory-performance bottlenecks\cite{sohi1991high,mi2010software,o2017fine,panda1998incorporating}. With next-generation memory showing outstanding performance and characteristics on CPUs and GPUs\cite{manegold2002generic}, HMC-based FPGA implementations have demonstrated excellent performance on graph-related algorithms\cite{Zhang:2018,Khoram:2018}, and HBM is now the mainstay of high-performance off-chip memory on FPGAs\cite{du2020high}. Samsung presented HBM, which is based on 2.5D DRAM stacking and packaging technology, with multiple DRAM dies interconnected through Through-Silicon Vias (TSVs)\cite{jun2017hbm}. Take the FPGA-based HBM architecture implemented by Xilinx as an example\cite{alyushin2018bit}: it utilizes two HBM stacks and embeds the memory control system into the logic of the FPGA. The memory controller contains 16 memory channels, which are split into 32 pseudo channels and exposed to the FPGA logic as 32 AXI channels\cite{XilinxAXI:2017,ArmAXI:2017}. In terms of tools, to bring FPGA development closer to CUDA, MPI, and other GPU and CPU programming models, Xilinx designed the Vitis platform and the Xilinx Runtime (XRT)\cite{xrt:2019} that supports it, integrating RTL, HLS, and OpenCL\cite{XilinxVitis:2020,XilinxUltraFast:2020}. The Vitis platform adapts memory modules such as HBM to both RTL designs and high-level-language implementations\cite{XilinxAlveo:2020,XilinxVitis:2020,XilinxHBMFPGA:2019}. This approach has achieved excellent results in AI and HPC. In addition, XRT provides an interactive subsystem between the host and the kernel. This subsystem simplifies the implementation of PCIe and DMA, making it more convenient for the host to invoke the FPGA, with an interaction efficiency very close to that of GPU calls implemented with CUDA\cite{cong2018understanding}; some AI-related workloads have even seen several-fold efficiency improvements. \section{Benchmarking} When benchmarking memory, we measure the throughput and latency under possible application scenarios. We carefully design the benchmark code to precisely isolate the effect of each parameter. We can compare the performance of the Vitis and Verilog implementations to quantify the performance cost of the high programmability that Vitis brings. We also measure the actual memory-access performance on both HBM and DDR; since each channel of the HBM memory-access benchmark architecture is implemented in HLS, each channel can be driven by an independent kernel. Assuming that the bit width of the $i$-th channel is $W_i$, the number of memory-access rounds is $I_i$, the clock frequency is $F$, and the system running time (measured by the host) is $T$, the actual memory-access bandwidth achieved by HLS is given by Equation~\ref{equation:system bandwidth}. \begin{equation} \begin{aligned} BW = \frac{\sum_{i=0}^{N-1}I_i * W_i}{T*8*10^9} \end{aligned} \label{equation:system bandwidth} \end{equation} In order to compare the HLS implementation against the theoretical performance, we also derive the theoretical memory-access bandwidth. Since the AXI protocol is the actual memory-access mechanism on the FPGA and each pair of AXI buses can issue at most one transfer request per clock cycle, with the HBM maximum bit width $W=256$ and all $N=32$ channels activated, Equation~\ref{equation:theoretical bandwidth} gives the theoretical bandwidth. \begin{equation} \begin{aligned} BW = \frac{N*W*F}{8*10^9} \end{aligned} \label{equation:theoretical bandwidth} \end{equation}
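For illustration, assuming a kernel clock of $F=\SI{450}{MHz}$ (an assumed, typical HBM AXI clock on this class of platform; the value is not fixed by the equation), the theoretical bandwidth evaluates to
\begin{equation*}
BW = \frac{32 \times 256 \times 450\times10^{6}}{8\times10^{9}} = \SI{460.8}{GB/s},
\end{equation*}
which matches the nominal peak bandwidth of the two HBM stacks.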
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subfloat[ HLS\phantom{ }]{% \includegraphics[width=2.3in]{figure_eps/latency_1.eps}% \label{fig:latencys1_HLS}% } \subfloat[ RTL\phantom{ }]{% \includegraphics[width=2.3in]{figure_eps/RTL1.eps}% \label{fig:latencys1_RTL}% } \subfloat[ DDR\phantom{ }]{% \includegraphics[width=2.3in]{figure_eps/ddr_p1.eps}% \label{fig:latencys1_ddr}% } \caption{Latency of Various Implementation} \label{fig:latencys_Imp} \end{figure} \subsection{Latency Testing} We measure the memory-access latency of consecutive memory read transactions when the memory controller is in an ``idle'' state, i.e., when no other pending memory transactions exist in the memory controller, such that the controller can return the requested data to the read transaction with minimum latency. We aim to identify the latency cycles of the page-hit, page-closed, and page-miss states. The ``page closed'' state occurs when a memory transaction accesses a row whose corresponding bank is closed, so a row Activate command is required before the column access. The ``page miss'' state occurs when a memory transaction accesses a row that does not match the active row in a bank, so one Precharge command and one Activate command are issued before the column access, resulting in maximum latency. The ``page hit'' state occurs when a memory transaction accesses a row that is open in its bank, so no Precharge or Activate commands are required before the column access, resulting in minimum latency. \begin{figure}[h] \centering \includegraphics[width = 2.5in]{figure_eps/outstanding.eps} \caption{Effect of the number of outstanding channels} \label{fig:outstanding} \end{figure} As shown in Figure~\ref{fig:latencys_Imp}, to compare the impact of different implementations on memory access, we implement three different memory-access latency benchmarks. The first, shown in Figure~\ref{fig:latencys1_HLS}, is based on HLS under the Vitis architecture and benchmarks latency on the HBM memory architecture.
The second, shown in Figure~\ref{fig:latencys1_RTL}, is a Vitis-based RTL implementation, with the latency benchmark also performed on HBM. The last, shown in Figure~\ref{fig:latencys1_ddr}, uses the same implementation as the first, but benchmarks latency on the DDR memory architecture. Across these three tests, the memory-access latencies measured under the HLS implementations are similar. The HLS memory-access path includes the Gmem adapter and the controller's cache layer, which act as a register buffer group and predict memory-access behavior; as a result, the HLS-based latency is larger than the RTL-based latency. At the same time, DDR memory-access latency is more stable than HBM, with fewer high-latency events, indicating that DDR pages and banks are much larger than those of HBM. \begin{figure}[htbp] \centering \setlength{\abovecaptionskip}{+5pt} \setlength{\belowcaptionskip}{-15pt} \includegraphics[width = 2.5in]{figure_eps/latency_2.eps} \caption{Memory Access Latency With Various Stride} \label{fig:latencys_str} \end{figure} As shown in Figure~\ref{fig:latencys_str}, we use different strides to test the page behavior of latency. In contrast to the RTL implementation, the step size of the HLS-based implementation is determined by the unit size of the data structure behind the memory-access pointer. Under a 256-bit memory-access data structure, we tested the memory-access latency for $stride=1,2,3,4,8,9,10,18$. We can see that, due to the HLS buffering, the page-hit and page-closed latencies do not differ significantly, while the peak of the maximum latency shifts noticeably with the stride. \begin{table}[!htbp] \centering \setlength{\abovecaptionskip}{+5pt} \setlength{\belowcaptionskip}{-12pt} \caption{Latency statistics per HBM channel} \label{tahab:graphs} \begin{tabular}{|c|c|c|c|c|c|} \hline \multirowcell{2}{Channel} &\multicolumn{2}{c|}{Minimum Latency} &\multicolumn{2}{c|}{Maximum Latency} &\multirowcell{2}{Average} \\ \cline{2-5} &MIN&AVG&MAX&AVG& \\ \hline \hline 0&58& 58.2391& 174& 106.7290& 60.5851\\ \hline 2&58& 58.2846& 174& 106.1408& 60.5856\\ \hline 4&58& 58.3060& 177& 106.1307& 60.5861\\ \hline 6&58& 58.2861& 174& 106.3653& 60.5860\\ \hline 8&58& 58.4740& 174& 104.2915& 60.5869\\ \hline 10&58& 58.2483& 176& 106.4918& 60.5850\\ \hline 12&71& 71.7729& 185& 157.5382& 74.1615\\ \hline 14&58& 58.3006& 175& 106.1307& 60.5859\\ \hline 16&58& 58.2501& 174& 106.3061& 60.5851\\ \hline 18&58& 58.2529& 175& 106.3207& 60.5850\\ \hline 20&58& 58.2723& 174& 106.2391& 60.5855\\ \hline 22&58& 58.2791& 174& 106.0310& 60.5852\\ \hline 24&58& 58.3348& 177& 105.4133& 60.5861\\ \hline 26&58& 58.2773& 174& 106.1195& 60.5854\\ \hline 28&58& 58.2276& 174& 106.6565& 60.5848\\ \hline 30&58& 58.2460& 176& 106.4774& 60.5850\\ \hline \hline \end{tabular} \end{table} \subsection{Effect of Parameter} After the general latency benchmark, we test specific parameters of the HLS implementation. The high-level language features of HLS determine how these parameters influence memory access. Compared with the latency benchmark, these parameter benchmarks focus on the higher-level performance and characteristics of HBM under the HLS implementation.
Combining the high-level language features and HLS's access characteristics on the AXI bus system, we test the memory-access performance of HBM from the following four aspects: 1. a benchmark of the memory-access data structure; 2. memory bursts based on the AXI bus structure; 3. memory access based on the number of outstanding data channels; 4. memory access based on HBM's address mapping policy (AMP). We need to pay great attention to the memory subsystem on the FPGA, since memory transactions consume the majority of the overall computing power and since memory performance can be the bottleneck of overall performance. In the following, we quantitatively examine the effect of each parameter related to external memory instructions. In Xilinx FPGA implementations, the main method of accessing off-chip memory is the AXI bus, including AXI3, AXI4, and AXI-Stream. Generally speaking, AXI3 master access is the most common implementation of the bus. Therefore, whether in RTL, HLS, or even OpenCL, memory-access performance is related to the AXI parameters. In HLS, the AXI memory-access parameters include the explicit parameter Unit Size and the implicit parameters Latency, Depth, Burst Size, and Outstanding Transactions. Depth is a parameter that assumes the memory-access depth and memory size; it is used for simulation only, so it does not affect actual performance. \subsubsection{Effect of Latency} Latency is a parameter that models the assumed memory-access delay; its value affects the depth of the replicated pipeline generated for the For Loop. \subsubsection{Effect of Unit Size} Unit Size is an explicit parameter; its impact on throughput reflects the characteristics of the data structure used in the implementation. Since the basic storage unit of memory is the byte, to keep the experiments simple and representative, we use 32-bit to 512-bit data structures for the benchmark (512 bits is the maximum Unit Size of a single AXI transfer). For comparison, we also test an int16 structure, shown on the X-axis of Figure~\ref{fig:unit_size}: in one case a unit is a single 512-bit value, in the other it is 16 int units. We also report the resource consumption when varying the Unit Size. As shown in Figure~\ref{fig:unit_size}, in the data-structure benchmark, the throughput of memory access improves as the unit size grows, clearly with linear growth. However, using Arbitrary Integer Precision Types directly is less efficient than using a Data Structure. This is because, for Arbitrary Integer Precision Types in HLS, if bit-range selection and bit operations are not used, the system constructs an operation unit with the same length as the integer, whereas for a Data Structure, the length of the data unit inside the structure defines its operation unit. That is, if the Arbitrary Integer Precision Type is 512 bits and the Structure is 16 integers, the system sets the operation length to 512 bits for the former, while the latter gets 16 independent integer units, which results in higher throughput.
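To make these parameters concrete, the sketch below shows how they are typically expressed on a Vitis HLS kernel interface. The kernel, port, and bundle names are illustrative, and the specific values are examples rather than the exact settings used in our benchmarks.

\begin{verbatim}
#include <ap_int.h>

// Minimal Vitis HLS sketch of a sequential-read kernel on one HBM
// channel, exposing the AXI parameters discussed above.
extern "C" void readBench(const ap_uint<512>* in, ap_uint<512>* out,
                          int rounds) {
#pragma HLS INTERFACE m_axi port=in offset=slave bundle=gmem0 max_read_burst_length=64 num_read_outstanding=32 latency=64
#pragma HLS INTERFACE m_axi port=out offset=slave bundle=gmem1
#pragma HLS INTERFACE s_axilite port=rounds
#pragma HLS INTERFACE s_axilite port=return

    ap_uint<512> acc = 0;
    for (int i = 0; i < rounds; ++i) {
#pragma HLS PIPELINE II=1
        acc ^= in[i];  // one 512-bit (Unit Size) read per cycle
    }
    out[0] = acc;      // write once so the reads are not optimized away
}
\end{verbatim}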
\subsubsection{Effect of Burst Size} As shown in Figure~\ref{fig:burst_size}, from the architectural point of view, U280's access to HBM is wholly primary on the AXI bus structure, so the optimization of AXI memory access parameters can also optimize the HBM memory access performance. The implementation based on the Vitis platform (Whether implemented in HLS or RTL) There are restrictions on the access parameters of AXI (AXI4 master), and the main controllable parameter is the burst-related transmission signal. In the HLS implementation, we can easily set the burst length to test the seven transmission lengths that the burst is equal to 2, 4, 8, 16, 32, 64, 128, and 256. \begin{table}[!htbp] \centering \setlength{\abovecaptionskip}{+5pt} \setlength{\belowcaptionskip}{-12pt} \caption{Utilization of Burst Size(Dataflow) Benchmark} \label{tab:UtilizationDataflowB} \begin{tabular}{|c|c|c|c|} \hline \multirowcell{2}{Burst\\ Size} & \multicolumn{3}{c|}{Utilization}\\ \cline{2-4} & LUT& FF&BRAM\\ \hline 2 & 16.1\% & 11.6\% & 11.4\%\\ \hline 4 & 16.3\% & 11.7\% & 11.4\%\\ \hline 8 & 16.2\% & 11.7\% & 11.4\%\\ \hline 16 & 16.2\% & 11.7\% & 11.4\%\\ \hline \end{tabular} \end{table} As shown in Figure~\ref{fig:burst_size}, the burst size has a limited impact on the continuous For Loop and Dataflow memory access implementation, but from the Table~\ref{tab:UtilizationDataflowB} and Table~\ref{tab:UtilizationLOOPB}, the BRAM consumption increases as the burst size increases. Therefore, the burst size has less impact on the HLS implementation that can issue a transmission request every cycle. \begin{table}[!htbp] \centering \setlength{\abovecaptionskip}{+5pt} \setlength{\belowcaptionskip}{-12pt} \caption{Utilization of Burst Size(Loop) Benchmark} \label{tab:UtilizationLOOPB} \begin{tabular}{|c|c|c|c|} \hline \multirowcell{2}{Burst\\Size} & \multicolumn{3}{c|}{Utilization}\\ \cline{2-4} & LUT& FF&BRAM\\ \hline 2 & 21.8\% & 23.7\% & 37.4\%\\ \hline 4 & 21.8\% & 23.7\% & 37.4\%\\ \hline 8 & 21.8\% & 23.7\% & 37.4\%\\ \hline 16 & 21.8\% & 23.7\% & 37.4\%\\ \hline 32 & 21.7\% & 23.7\% & 37.4\%\\ \hline 64 & 21.6\% & 23.6\% & 49.8\%\\ \hline \end{tabular} \end{table} \subsubsection{Effect of Outstanding Transactions} AXI-based memory access behavior, in addition to the explicit burst parameter, there is an implicit outstanding memory access parameter, which sets the number of cache channels to cache multiple requests of memory access in parallel, which can alleviate memory access performance degradation caused by memory access latency. The Outstanding parameter is transparent in the implementation of RTL; as shown in Figure~\ref{fig:outstanding}, its implementation is transparent to the AXI port parameters. However, for the HLS implementation, this parameter can be set and tested. According to its characteristics, we have the corresponding parameters for outstanding benchmark, including 4, 8, 16, 32, 64, 128, 256, and other six transmission channel parameters for the benchmark. 
\begin{table}[!htbp] \centering \setlength{\abovecaptionskip}{+5pt} \setlength{\belowcaptionskip}{-12pt} \caption{Utilization of Outstanding Transaction(Dataflow) Benchmark} \label{tab:UtilizationOutstandingD} \begin{tabular}{|c|c|c|c|} \hline \multirowcell{2}{Outstanding\\Transaction}& \multicolumn{3}{c|}{Utilization}\\ \cline{2-4} & LUT& FF&BRAM\\ \hline 2 & 21.8\% & 22.1\% & 37.4\%\\ \hline 4 & 21.9\% & 22.1\% & 37.4\%\\ \hline 8 & 21.9\% & 22.1\% & 37.4\%\\ \hline 16 & 21.9\% & 22.1\% & 37.4\%\\ \hline 32 & 21.9\% & 22.1\% & 37.4\%\\ \hline 64 & 22.0\% & 22.1\% & 45.6\%\\ \hline \end{tabular} \end{table} As shown in Figure~\ref{fig:outstanding}, the impact of Outstanding Transactions on Dataflow and For Loop is quite apparent, especially Dataflow, which is in line with linear growth. However, as shown in Table~\ref{tab:UtilizationOutstandingD}, the growth of Outstanding Transactions is also positively correlated with BRAM consumption, But because BRAM bandwidth and depth are discrete, the change is not significant. \subsubsection{Effect of Stride} In the stride benchmark, we use the standard for Loop to test, such as Algorithm~\ref{alg:Stride memory access}, we used the Host to transport the address offset parameter to the implementation on board. then we use the standard For Loop to test the performance on different strides, which we consider how to influence throughput, so we do not limit burst and outstanding in this benchmark. In the sequential read and write benchmark, we also test it with a standard For Loop. Also, to obtain the highest performance, we do not limit burst and outstanding and restrict the stride to 1. Simultaneously, to obtain its performance in reading, writing, and reading and writing, we will also test these three states in sequence. \subsubsection{Effect of number of kernels} The number of kernels in HLS is also an interesting parameter that has an impact on bandwidth. Under the same memory channel usage (32 channels), the impact of different kernel numbers on Throughput decreases as the kernel increases. As shown in the Table~\ref{tab:KernelThroughput}, we tested 1, 2, 4, 8, 16, 32 kernels, of which 1, 2 kernels use Dataflow, and the others use For Loop. It also shows that the performance is better when there are fewer kernels. 
\begin{table}[!htbp] \centering \setlength{\abovecaptionskip}{+5pt} \setlength{\belowcaptionskip}{-12pt} \caption{Throughput Benchmark of Number of Kernels} \label{tab:KernelThroughput} \begin{tabular}{|c|c|c|c|c|} \hline \multirowcell{2}{Number \\of Kernel}& \multirowcell{2}{Throughput\\ (GB/s)} & \multicolumn{3}{c|}{Utilization}\\ \cline{3-5} & & LUT& FF&BRAM\\ \hline 32 kernel & 136.093 & 21.6\% &11.9\%&36.6\%\\ \hline 16 kernel& 187.093 &21.8\% &23.7\%&37.4\%\\ \hline 8 kernel& 373.381 &20.5\%&22.8\%&37.4\%\\ \hline 4 kernel& 406.523 &21.9\%&21.8\%&37.4\%\\ \hline 2 kernel$^*$& 418.116 &16.5\%&11.9\%&11.8\%\\ \hline 1 kernel$^*$& 421.691 &16.1\%&11.6\%&11.4\%\\ \hline \end{tabular} \end{table} \subsubsection{OLD!!!!} \begin{figure}[!htbp] \centering \setlength{\abovecaptionskip}{+5pt} \setlength{\belowcaptionskip}{-15pt} \includegraphics[width = 2.5in]{figure_eps/unit_size.eps} \caption{Throughput With Various Unit Size} \label{fig:unit_size} \end{figure} \begin{figure}[htbp] \includegraphics[width=2in]{figure_eps/stride1.eps} \caption{Throughput With Various Stride (Loop)} \label{fig:stride_pipe}% \end{figure} \begin{figure}[htbp] \includegraphics[width=2in]{figure_eps/stride2.eps} \caption{Throughput With Various Stride (Dataflow)} \label{fig:stride_loop}% \end{figure} \begin{figure}[t] \captionsetup[subfigure]{justification=centering} \centering \subfloat[ \phantom{HLSHLSHLHSLH HLS. HLSHHHHH} ]{% \includegraphics[width=2.5in]{figure_eps/burst1.eps}% \label{fig:burst_Pipeline}% } \subfloat[ \phantom{HLSHLSHLHSLH HLS. HLSHHHHH} ]{% \includegraphics[width=2.5in]{figure_eps/burst2.eps}% \label{fig:burst_LOOP}% } \caption{Throughput With Burst Size} \label{fig:burst_size} \end{figure} In the random performance benchmark, to cover the impact of different random addresses on the memory access performance, we use two different address generation methods for memory access to test the random performance: random generator and point chasing\cite{weisz2016study}. Random generator refers to directly using to generate random addresses for memory access on board. Since there is no standard random number generation library for HLS-based implementations, we use line feedback shift register (LSFR)~\cite{krawczyk1994lfsr,panda2012fpga,schellekens2006fpga,tsoi2003compact} for random number generation, such as Algorithm~\ref{alg:LFSR} and Algorithm~\ref{alg:point_chasing}, due to the random address generated independently, we use a pipeline to conduct this random performance test. Point chasing uses random numbers to generate a random linked list, stores the linked list in HBM (Host completes this part), and then uses the random linked list in the HLS implementation to obtain random performance. \begin{algorithm}[htbp] \caption{LFSR random generate} \Input{The seed data $SEED$,The LSFR initial signal $START$, Maximal-length polynomials for LSFR $x^{e_0}+x^{e_1}+...+x^{e_{k}}+1$} \Output{the current random number $LFSR$} \label{alg:LFSR} \If{$START$ is equal to $1$}{ $LSFR \gets SEED$; } \Else{ \ForEach{$j \in [0,k]$} { $BITS \gets BITS \And LSFR.Getbit(e_0-e_j)$;\\ } $LSFR >> 1$;\\ $LSFR.Setbit(e_0,BITS);$ } \end{algorithm} As shown in Figure~\ref{fig:unit_size}, in the benchmark of data structure, as the length of unit size increases, the throughput of the memory access also improves, which is obviously with linear growth. 
However, the efficiency of using Arbitrary Integer Precision Types directly is not as good as using Data Struct; this is because of Arbitrary Integer Precision Types in HLS If disusing the bit range selection and bit operation, the system will construct an operation unit with the same length as the Integer. Compared with Struct, the length of the data unit inside the Data Struct define its operation unit, that is, if Arbitrary Integer Precision Types is 512bit and Struct is 16 Integer, then the system will set the operation length to 512bit for Arbitrary Integer, and Struct will have 16 independent Integer units, which results in higher performance in throughput. \begin{algorithm}[htbp] \caption{Point chasing address benchmark} \Input{The total transaction $I$, memory data channel ${H_n}$} \Output{memory data out $D$,memory access address$ADDR$} \label{alg:point_chasing} $ADDR \gets 0$ \ForEach{$i \in [0,I-1]$}{ $D\gets {H_n}[ADDR]$;\\ $ADDR \gets D$; } \end{algorithm} \section{Optimizing Memory Access Pattern} The main goal of throughput testing is to examine the exact relationship between throughput and parameter \& resource consumption. For each load/store site, such a relationship can guide the FPGA programmer who intends to use Vitis (generally HLS) to choose the right optimization level that not only meets throughput requirement but also consumes as few resources as possible. In the benchmark phase, to reduce the impact of actual implementation on performance, we will test burst and outstanding benchmark under the two architectures of standard For Loop and Dataflow at the same time. For the standard data structure benchmark, since it has nothing to do with implementation, it will only be tested in a standard For Loop. For dependency false, we use Dataflow for this kind of test with a closer relationship with the HLS implementation, which has a relatively high degree of parallelism. After completing the standard benchmark, we consider the impact of some actual implementations on performance. Because of the above benchmarks on latency and some parameters, to shield the performance impact of address jumps, all we consider are address steps Sequential read/write is 1. So next, we will perform multiple tests on the memory access performance of multiple different step sizes, standard sequential read/write, random read/write, and other conventional applications such as AI and other HPC architectures in HBM. \begin{algorithm}[h] \caption{Stride Address Benchmark} \Input{The total transaction $I$,Work group size of HBM channel $G$,The stride of address $S$, HBM data channel ${H_n}$} \Output{HBM data out $D$,HBM access address$ADDR$} \label{alg:Stride memory access} $ADDR \gets 0$ \ForEach{$i \in [0,I-1]$}{ $D\gets {H_n}[\left(ADDR+S\right) \mod G]$; } \end{algorithm} As shown in Figure~\ref{fig:stride_pipe} and Figure~\ref{fig:stride_loop} of the benchmark on stride, we find that as stride increases, the memory access performance is a significant reduction, this is due to the outstanding failure to bridge the memory access latency, which is the same as the conclusion we found in the random access test, As shown in the Table~\ref{tab:Throughput}. 
\begin{table}[!htbp] \centering \setlength{\abovecaptionskip}{+5pt} \setlength{\belowcaptionskip}{-12pt} \caption{Throughput Benchmark of Random (LFSR)} \label{tab:ThroughputLFSR} \begin{tabular}{|c|c|c|c|c|} \hline \multirowcell{2}{Outsanding\\ Transactions}& \multirowcell{2}{Throughput\\ (GB/s)} & \multicolumn{3}{c|}{Utilization}\\ \cline{3-5} & & LUT& FF&BRAM\\ \hline 2 & 5.96 & 16.1\%& 11.6\%& 11.4\%\\ \hline 4 & 6.00 & 16.1\%& 11.7\%& 11.4\%\\ \hline 8 & 5.57 & 16.2\%& 11.8\%& 11.4\%\\ \hline 16 & 5.82 & 21.8\%& 23.7\%& 37.4\%\\ \hline 32 & 5.57 & 20.9\%& 23.5\%& 36.6\%\\ \hline 64 & 5.64 & 20.8\%& 23.5\%& 36.6\%\\ \hline \end{tabular} \end{table} \begin{table}[!htbp] \centering \setlength{\abovecaptionskip}{+5pt} \setlength{\belowcaptionskip}{-12pt} \caption{Throughput Benchmark of Random Comparison} \label{tab:ThroughputRandom} \begin{tabular}{|c|c|c|c|c|} \hline \multirowcell{2}{Benchmark\\Model}& \multirowcell{2}{Throughput\\ (GB/s)} & \multicolumn{3}{c|}{Utilization}\\ \cline{3-5} & & FF&LUT &BRAM\\ \hline Sequential & 421.68 & 16.1\%&11.6\%&11.4\%\\ \hline random(LFSR)& 5.82 &16.2\%& 11.8\%& 11.4\%\\ \hline random(Point)& 0.994161 &16.5\%&11.5\%&11.4\%\\ \hline \end{tabular} \end{table} \section{Optimizing Memory Access Pattern for Applications} The performance of memory-bound application is highly sensitive to its underlying memory access pattern. \subsection{Machine Learning Inference} We present convolutional network computing efficiency as a fundamental benchmark for machine learning implementation, which performs a convolution calculation with a kernel of $11*11$ on a $1920*1080$ matrix in this benchmark. In the implementation, we utilize three types of implementation 1. CPU-based benchmark, 2. HBM-based single-kernel with the dual-channel benchmark (read and write on different memory channels) 3. HBM-based 32-channel with 16 kernels benchmark (each kernel contains one read and another write channels). As shown in the Table~\ref{tab:Convolution}, the runtime is that FPGA implementation can achieve a performance improvement of more than $100X$ the implementation for CPU, and the parallel performance of multi-channel is also $10X$ higher than the performance of a single kernel, and its resource consumption is only about $2X$. \subsection{Database (DB)} It is well known that evaluating a database query, consisting of a few database operators, is typically memory-bound on modern hardware like CPUs, so it is critical to understand and optimize the memory performance of each database operator so as to achieve high overall performance. Accordingly, the database community presents a few basic memory access patterns, upon which memory access costs of these database operators are modeled~\cite{manegold2002generic}. The performance characteristics of these basic patterns, together with the corresponding optimizations, are well analyzed on modern CPUs. However, these basic patterns are still not systematically analyzed on FPGAs. So we analyze four basic patterns include repetitive sequential traversal (\emph{rs\_stra}), repetitive random traversal (\emph{rr\_stra}) and random access (\emph{r\_acc}), and interleaved multi-cursor sequential access (\emph{nest}).\footnote{The other two basic patterns are subset of the above four patterns. } Table~\ref{tab:database} illustrates the throughput of each basic pattern, as well as the corresponding FPGA resource consumption. \noindent {\bf Optimizing \emph{rs\_stra}. } Larger unit size leads to higher memory throughput. 
Larger stride leads to lower memory throughput, while large unit size can amortize memory throughput loss. \noindent {\bf Optimizing \emph{rr\_stra}. } Larger unit size leads to higher memory throughput. \noindent {\bf Optimizing \emph{r\_acc}. } Larger unit size leads to higher memory throughput. \noindent {\bf Optimizing \emph{nest}. } Larger unit size and/or appropriate stride leads to high memory throughput. \begin{table}[!htbp] \centering \setlength{\abovecaptionskip}{+5pt} \setlength{\belowcaptionskip}{-12pt} \caption{Database Benchmark} \label{tab:database} \begin{tabular}{|c|c|c|c|c|} \hline \multirowcell{2}{Access\\Patterns}& \multirowcell{2}{Throughput\\ (GB/s)} & \multicolumn{3}{c|}{Utilization}\\ \cline{3-5} & & LUT& FF&BRAM\\ \hline rs\_tra & 13.26 &9.0\% & 7.1\% & 10.8\%\\ \hline rr\_tra & 3.51&9.1\%& 7.1\% & 11.2\%\\ \hline r\_acc &0.68 & 9.1\%& 7.1\%&10.7\%\\ \hline nest & 421.89 &16.5\% & 11.9\% & 11.8\%\\ \hline \end{tabular} \end{table} \begin{table}[!htbp] \centering \setlength{\abovecaptionskip}{+5pt} \setlength{\belowcaptionskip}{-12pt} \caption{Convolution Benchmark} \label{tab:Convolution} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirowcell{2}{Type}& \multirowcell{2}{Channel}& \multirowcell{2}{$BW$\\(GB/s)} & \multirowcell{2}{$T$\\(s)} & \multicolumn{3}{c|}{Utilization}\\ \cline{5-7} & & & &FF&LUT &BRAM\\ \hline CPU & - & 0.263 & 0.06 & - & - & - \\ \hline FPGA & 2 & 0.0080 & 2.04 & 9.1\% & 7.0\% & 11.3\%\\ \hline FPGA & 32 & 0.012 & 21.0 & 18.8\% & 14.8\% & 31.3\%\\ \hline \end{tabular} \end{table} \section{Conclusion} By benchmark the memory access bandwidth of the Vitis platform's FPGA platform implementation on the HLS, the HLS on the Vitis platform can fully present to the performance of HBM and other storage systems, especially for HBM. Compared to the direct use of Vivado , our architecture can make more comfortable reach the peak of theoretical performance. Furthermore, for HLS, we can also make a more detailed and actual performance benchmark on HBM from the realistic system's perspective. In this benchmark, we conducted a comprehensive examination on the HBM2 of all two stacks and its 32 pseudo channels, and obtained their memory access characteristics, and compared them with the benchmark of Shuihai. Accordingly, we propose an architecture for FPGA memory access performance and memory access system under the Vitis platform that is easy to test and expand. This architecture provides a comprehensive and detailed overview of the off-chip memory structure of FPGA, mainly HBM. The benchmark includes latency, bandwidth throughput, random and continuous performance, particular parameters, and some tests at the system (such as database) level, so that we can understand the feature of FPGA memory access, especially HBM2, under the Vitis platform and HLS implementation, And its advantages with CPU/GPU implementation. We will extend the benchmark implemented in this paper to different FPGA development boards and provide open source code for more benchmark. \section*{Acknowledgment} \section{design} The goal of this section is to use Vitis to benchmark memory in terms of throughput and latency. In particular, we intend to measure the performance of the HLS implementation with various optimization parameters in Vitis. Therefore, our design and implementation mainly have two aspects. 
The first one is the memory access pattern based on the standard for loop, which is mainly to test some standard memory accesses' highest performance and the performance impact of some standard parameters. The second one is the memory access mode based on Dataflow, closer to the RTL method. It uses inter-function parallelism to decompose the memory access behavior of the standard for loop into address generation, memory access, and data operation, such that their memory accesses are completely independent. To this end, we can use the memory access behavior independence and function parallelism to measure memory access latency, some particular address behaviors (such as random addresses), and memory access based on HLS in discontinuous memory access features and performance. In the following, we will first present a memory latency benchmarking engine that allows us to accurately measure memory latency with Vitis, followed by a memory throughout benchmarking engine that is used to measure memory throughput under different optimizations. Table~\ref{tab:param_label} illustrates all the symbols and their corresponding meanings used in this paper. \begin{table}[htbp] \centering \caption{SUMMARY OF RUNTIME PARAMETERS} \label{tab:param_label} \begin{tabular}{|c||p{180pt}|} \hline Parameter&Definition \\ \hline \hline $F$&Kernel frequency\\ \hline $BW$&Memory bandwidth \\ \hline $N$&Number of memory channels (normally $N=32$)\\ \hline $W$&Bit-width of a memory transaction\\ \hline $B$&Burst size\\ \hline $NO$&Number of outstanding memory transactions\\ \hline $T$& Run-time\\ \hline $T_l$&Latency of one memory transaction\\ \hline ${\tau}_{II}$&Iteration interval in a loop,or relative latency of memory transactions\\ \hline $n$&No. of memory channels, $0\leq n\geq31$\\ \hline $i$&An iteration of a loop\\ \hline $T_{s_i}$&Starting time of an memory transaction in an iteration of a loop \\ \hline $T_{e_i}$&Ending time of an memory transaction in an iteration of a loop \\ \hline $T_o$& Latency of an operation except memory transaction in a loop \\ \hline $I$ & Total number of transactions\\ \hline $FIFO$ & First in first out tunnel\\ \hline $H_n$ & NO.n of memory tunnels\\ \hline $G$ & Work group size of a memory channel\\ \hline \end{tabular} \end{table} \subsection{Memory Latency Benchmark} When design the memory latency benchmarking engine, we mainly address the following two challenges. First, the memory latency benchmarking engine is implemented with Vitis and Xilinx Runtime (XRT), which abstract away the implementing details of memory transaction such that we are not able to configure each memory transaction whose interface is AXI in a fine-grained manner like implemented with Verilog~\cite{}. 
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \setlength{\abovecaptionskip}{+5pt} \setlength{\belowcaptionskip}{-15pt} \subfloat[Latency Benchmark Kernel Architecture \phantom{1} ]{% \includegraphics[width=3in]{figure_eps/architecture.eps}% \label{fig:latencys_archi}% } \subfloat[ Memory Access Unit \phantom{ 1} ]{% \includegraphics[width=3in]{figure_eps/Memory_Access.eps}% \label{fig:latencys1_access}% } \subfloat[ Latency Counter Unit \phantom{1} ]{% \includegraphics[width=2in]{figure_eps/Latency_Counter_Unit.eps}% \label{fig:latencys_counter}% } \caption{Hardware design of memory latency benchmark engine} \label{fig:latency_t_archi} \end{figure} Second, the framework built on Vitis and HLS is not suitable for direct use for simplicity and ease of use, compared to the Verilog-based memory benchmarking tool Shuhai~\cite{} that allows to directly send the latency numbers back to the host using the PCIe module. To address the above two challenges, we propose a memory benchmark architecture with HLS implementation. As shown in Figure~\ref{fig:latency_t_archi}, because HLS shields the timing details, we need to build a module that builds an independent accumulation loop structure. This module uses a loop accumulator and controls the loop in every clock cycle, accumulating and using it for sequence counting. For the memory access part of memory, as shown in Figure~\ref{fig:latencys1_access} and Algorithm~\ref{alg:memory access}, we have designed a module to perform a cyclic memory access. This cycle uses the principle of blocking loops to ensure that the module is blocked at the beginning of each memory access operation until the data returns so that we can obtain the clock delay of the memory accessing through each blocked memory access operation. However, to count each memory access operation's latency, as shown in Algorithm~\ref{alg:Latency count} and Figure~\ref{fig:latencys_counter}, we also need to parallel the memory access module with the loop accumulation module. We use the HLS Data Flow structure to construct these two modules and use a FIFO called memory data FIFO to link and control. Whenever the memory access module receives data, it writes the data into the memory data FIFO, and whenever the loop accumulator reads data from the FIFO, it clears the loop accumulator. Zero and transfer the accumulated value (as the latency period between two memory accesses) through another FIFO (latency data FIFO) to the last module. The last module writes data back, as shown in Algorithm~\ref{alg:write_back}. It will write each read from the latency data FIFO to another memory channel. Compared to Shuhai, the latency benchmark implemented on the Vitis module can only exchange data with the host through memory mapping. So in order to make HLS easy to use and complete data, we build a write-back module that writes the latency data read from the latency data FIFO back to another memory channel, ensuring data integrity while avoiding the latency data being disturbed by writing back to memory However, the HLS implementation involves two dependency memory channels (such as off-chip memory and another on-chip memory). The latency data obtained from one memory channel will be written back to another memory channel, due to the channel's memory access latency, which causes the cyclic accumulator to be blocked and generates incorrect latency statistics data. 
Therefore, to prevent this factor's influence, we need to ensure that the FIFO's length and the writing outstanding of the memory channel for writing latency data are far greater than the clock cycle of the memory access latency. In addition to the HLS-based implementation, we also implement an RTL-based memory latency benchmark engine. This implementation is used on the Vitis platform as a benchmark to compare with the HLS implementation. The RTL implementation is based on the hardware implementation of Shuiai's C1 part\cite{wang2020shuhai}, and in order to adapt to the Vitis platform, we also write back the Shuiai's return data to another memory channel to prevent the whole hardware design from being optimized away. \begin{algorithm}[htbp] \caption{Memory Accessing} \Input{The total transaction $I$, memory data channel ${H_n}_1$} \Output{memory data FIFO tunnel ${FIFO}_d$} \label{alg:memory access} \ForEach{$i \in [0,I-1]$}{ $Result\gets {H_n}_1[i]$;\\ ${{FIFO}_d}.Write\left(Result\right)$; } \end{algorithm} \begin{algorithm}[htbp] \caption{Latency count} \Input{The total transaction $I$,memory data FIFO tunnel ${FIFO}_d$ ,the latency counter $C$ } \Output{Latency data FIFO tunnel ${FIFO}_l$} \label{alg:Latency count} $C\gets 0$;\\ \While{$i<I$}{ \If{${FIFO}_d$ is not empty}{ $Result \gets {FIFO}_d.read()$ ${{FIFO}_l}.Write\left(C\right)$;\\ $C\gets 0$;\\ $i\gets i+1$; } \Else{ $C\gets C+1$; } } \end{algorithm} \begin{algorithm}[htbp] \caption{Latency data write back to memory} \Input{The total transaction $I$, \\ Latency data FIFO tunnel ${FIFO}_l$} \Output{memory data channel ${H_n}_2$} \label{alg:write_back} \ForEach{$i \in [0,I-1]$}{ $Latency\gets {FIFO}_l.read()$;\\ ${H_n}_2[i]\gets Latency$; } \end{algorithm} The latency is on the basis of the memory hardware structure. In the above, we define the latency benchmark as the clock cycle from the beginning of a memory operation to the end of the memory operation. \begin{equation} \begin{aligned} T_l = T_{s_{i+1}}-T_{e_i} \end{aligned} \label{equation:absolute latency} \end{equation} This memory access latency is absolute, as shown in Equation~\ref{equation:absolute latency}, and it comes from the physical implementation of memory and its controller. \subsection{Memory Bandwidth Benchmarking Engine} In order to realize the benchmark influenced by multiple parameters of HLS, we need to consider the behavior of the high-level language under different parameters and its actual implementation on FPGA after compilation. For the implementation of HLS, we need to consider one main memory access feature: whether to fetch memory continuously and boundedly. Continuously bounded memory access refers to whether there is a specific starting address during the memory access process. At the same time, after each round of memory access operation, its operations are independent, that is, the next round of memory access operations has nothing to do with the previous round. For the continuous and bounded memory access implementation, we use the standard for loop to implement it. We adopt the dataflow method for the realization of non-continuous and bounded memory access, which is similar to the latency test method. We separate the memory access address generation and the memory access behavior and use the FIFO to control the memory access behavior and memory access boundary, thus forming an independent memory access module, such as the kernel in Figure~\ref{fig:benchmark}. 
Using 32 kernels for memory access and host timing, we can obtain the actual bandwidth and performance of 32 channels under different parameters. There are two types of continuous memory accesses. One is the address continuous, and the other is the memory access request. Address continuity refers to an utterly continuous address generated during the memory access process without interruption. At this time, we can use the burst feature of AXI. In the case of an AXI handshake, we can directly read the burst size of memory. The continuous memory access request refers to continuously sending memory access requests to the memory controller, but the address is not necessarily continuous. At this time, there is a complete AXI handshake in each time memory accesses. Bounded memory access refers to the existence of an exact starting address during the memory access process, such as a standard for loop. Generally speaking, for HLS-based memory access implementations, continuous bounded memory access can achieve the highest performance. When an HLS implementation cannot achieve continuous bounded memory access, the actual impact on memory access performance is the outstanding parameter. All memory data accesses used at this time will carry out a complete AXI handshake protocol. For actual HLS, a memory access operation latency II in the loop is relative. We call it a relative latency. This latency can be under the influence of actual implementation. The definition of this latency is from The clock cycle from the starting of memory fetch to the starting of the next memory fetch. \begin{figure}[t] \captionsetup[subfigure]{justification=centering} \centering \subfloat[ \phantom{H} ]{% \includegraphics[width=3in]{figure_eps/timing-1.eps}% \label{fig:rellatency_1}% } \subfloat[ \phantom{H} ]{% \includegraphics[width=3in]{figure_eps/timing-2.eps}% \label{fig:rellatency_2}% } \subfloat[ \phantom{H} ]{% \includegraphics[width=3in]{figure_eps/timing-3.eps}% \label{fig:rellatency_3}% } \caption{Relative Latency} \label{fig:rellatency} \end{figure} \begin{equation} \begin{aligned} {\tau}_{II}=T_{s_{i+1}}-T_{s_i}=T_l+T_o \end{aligned} \label{equation:relative latency 1} \end{equation} The memory access latency and the absolute memory access latency, the relative access latency is composed of the total memory access latency, the number of memory access channels outstanding, and the operation latency after the memory access and the correlation of the data access (RAR, RAW, WAR, WAW). As shown in Figure~\ref{fig:rellatency_1} and Equation~\ref{equation:relative latency 1}, in this case, the next memory access behavior must be performed after all operations in the previous cycle are completed, and the relative access latency is the highest at this time. \begin{equation} \begin{aligned} {\tau}_{II}=T_{s_{i+1}}-T_{e_i}=T_l \end{aligned} \label{equation:relative latency 2} \end{equation} This access latency occurs in a completely no optimized FOR loop, or there is a correlation between the operation and the fetched data after the fetch (there is a correlation between cycles). As shown in Figure~\ref{fig:rellatency_2}, when there is no correlation between the memory access operations, the PIPELINE optimization feature is used, and the next memory access behavior can be executed before the last operation completed, that is, the next memory access behavior is completed in the previous memory access behavior Then, the relative latency is illustrated in Equation~\ref{equation:relative latency 2}. 
\begin{equation} \begin{aligned} {\tau}_{II}=MAX\left(1,\frac{N_O+T_l-N_O}{N_O}\right) \end{aligned} \label{equation:relative latency 3} \end{equation} fetched memory access data. The operation is only related to the current round of fetched data, but the fetched data is RAW or WAR. As shown in Figure~\ref{fig:rellatency_3}, when there is no correlation between the operation and the fetched data after the fetch, the pipeline optimization feature is used, and the next fetch behavior can be executed when the previous fetch behavior incomplete. It is related to the critical parameter of an HLS implementation of memory access is outstanding. When there is an outstanding memory access cache, the AXI protocol used when accessing the memory can cache part of the data before memory accessing returns. When the number of outstanding cache channels is $NO$, the relative latency at this time is Equation~\ref{equation:relative latency 3}. \begin{figure}[t] \centering \setlength{\abovecaptionskip}{+5pt} \setlength{\belowcaptionskip}{-15pt} \includegraphics[width = 3in]{figure_eps/arch.eps} \caption{Bandwidth Benchmark Architecture} \label{fig:benchmark} \end{figure} \section{Empirical Evaluation} \label{sec_experiment} \subsection{Experimental Setup} \label{subsec_experiment_stup} {\bf System Architecture. We deploy BiS-KM on the second generation of the Intel Xeon+FPGA platform~\cite{gupta2011harp}, consisting of an Arria 10 FPGA and a Broadwell 14-core E5 processor on the same socket (Figure \ref{system_architecture}). The FPGA has cache-coherent access to the CPU's main memory (64GB) through 1 QPI and 2 PCIe links, reaching an aggregated maximum throughput of 17GB/s. We use the open-source framework Centaur~\cite{centaur} for software-hardware integration. Centaur manages the data communication between the FPGA and the CPU. BiS-KM is instantiated within Centaur as a User-Defined-Function. \begin{figure}[t] \centering \includegraphics[width=3.3in]{figure/system_archi.png} \vspace{-2.5ex} \caption{System architecture of the target platform} \vspace{-3ex} \label{system_architecture} \end{figure} \noindent {\bf Hardware Configuration. The hardware implementation in our experiment consists of $DISP=32$ pipelines, each of which accommodates a sample. Each pipeline is equipped with 8 Distance Processors to support a maximum of 8 clusters and each Distance Processor contains a BiS-DP unit to process 16 bits from 16 features ($DIFP=16$) per cycle. The maximum number of dimensions supported is 1024. The clocking frequency is 200MHz. \noindent {\bf Workloads. }We run our experiments with four real-world data sets: OpenStreetMap~\cite{mapping}, Forest ~\cite{Forest}, Gas~\cite{gas} and Epileptic~\cite{epileptic}, as shown in Table~\ref{t_dataset}. The data sets cover a wide range of dimensions and are representative for clustering tasks. Because the original data set size of Gas, Epileptic and OpenStreetMap is small, we duplicate the original data (8 times, 8 times and 64 times respectively) in order to amortize the communication overhead between the CPU and the FPGA. Since the K-Means algorithm itself is sensitive to the initial centers, we use the same initial centers in all the experiments for each data set. \begin{table} [t] \def1.1{1.1} \centering \vspace{-0.5ex} \caption{Evaluated data sets. 
} \label{t_dataset} \vspace{-1.5ex} \begin{tabular}{|c|c|c|c|} \hline \textbf{Data sets}& \textbf{Features} & \textbf{Samples} & \textbf{Clusters} \\ \hline OpenStreetMap~\cite{mapping}& 28 & 674,944 & 6 \\ \hline Forest~\cite{Forest}& 54 & 581,012 & 7 \\ \hline Gas~\cite{gas}& 128 & 111,280 & 6\\ \hline Epileptic~\cite{epileptic}& 178 & 92,000 & 5\\ \hline \end{tabular} \vspace{-3ex} \end{table} \noindent {\bf Hardware Baseline. }To evaluate the effectiveness of our BiS-KM design, we choose the state-of-the-art flexible K-Means accelerator (Flex-KM)~\cite{flex-kmeans} as our baseline.\footnote{Actually, we re-implement Flex-KM on our Arria 10 FPGA according to the paper~\cite{flex-kmeans}. Our implementation can run at line rate (512 bits per cycle) with the same frequency of 200MHz.} \vspace{-1ex} \subsection{Hardware Efficiency: Throughput} \label{subsec_evaluation} \vspace{-0.5ex} In this subsection, we examine the hardware efficiency of the BiS-KM design in terms of throughput. The throughput is calculated by the data set size divided by the elapsed time required by an iteration. ``x-bit" means the BiS-KM design with an $x$-bit precision level, where $x$ varies from 1 to 32. \noindent\textbf{Effect of Dimensionality on Throughput. }We examine the effect of the number of dimensions on the achievable throughput. Under the BiS-KM memory layout, if the dimension of a data set is not a multiple of $DIFP$, we have to use zero padding to align it to $DIFP$, potentially wasting a certain amount of memory bandwidth due to the padding. Figure~\ref{Throughput} shows the throughput of BiS-KM on the four data sets for a varying number of dimensions. The throughput of BiS-KM varies only slightly with different dimensions. This is because the padding overhead is relatively small over the memory traffic between the FPGA and the host memory. The throughput of BiS-KM roughly reaches the theoretical memory bandwidth when the dimension of a data set is a multiple of $DIFP$=16. Take the data set Gas ($D$ = 128) as an example, BiS-KM can roughly saturate the FPGA's memory bandwidth, with its throughput close to the theoretical maximum bandwidth (512 bits at 200MHz is 12.8GB/s). However, there is still a small gap from the theoretical maximum bandwidth due to the fact that the computing pipelines are stalled in the global aggregation and division stages. \begin{figure}[tb] \centering \includegraphics[width=3.2in]{figure/throughput.png} \vspace{-1.5ex} \caption{Throughput of different data sets running with various precision levels using BiS-KM} \vspace{-3ex} \label{Throughput} \end{figure} \noindent {\bf Effect of Precision Level. }Figure~\ref{speed_up_over_32} depicts the runtime speedups of different low-precision levels of BiS-KM over the 32-bit precision Flex-KM for four data sets. We make three observations. 
\begin{figure}[t] \centering \includegraphics[width=3.2in]{figure/speedup_over_32_new.png} \vspace{-1.5ex} \caption{Speedup of runtime per iteration of various low-precision over 32-bit precision computation} \vspace{-3ex} \label{speed_up_over_32} \end{figure} \begin{figure}[t] \centering \vspace{-1.5ex} \subfloat[OpenStreetMap (28 features)]{\includegraphics[width=1.625in]{figure/map_mem_traffic.png} \label{mem_household}} \subfloat[Epileptic (178 features)]{\includegraphics[width=1.625in]{figure/epi_mem_traffic.png} \label{mem_covertype}} \vspace{-1.5ex} \caption{Memory traffic (bits) per sample as the precision varies} \label{mem_precision} \vspace{-4ex} \end{figure} First, BiS-KM achieves roughly linear speedup as the precision decreases, due to the linear reduction of memory traffic (Figure~\ref{mem_precision}). Thus, we conclude that the performance of BiS-KM is mainly bounded by the memory bandwidth between the CPU main memory and the FPGA. Second, the slightly sub-linear speedup observed at 4-bit precision level (Figure~\ref{speed_up_over_32}), is due to strided memory access, particularly when accessing the most significant four bits of every 32 bits. The DRAM's row buffer hit rate is about 4/32=12.5\%, affecting the achievable memory throughput.\footnote{The problem becomes worse at lower precision, e.g., a 2-bit precision, since the row buffer hit rate becomes even lower. Therefore, below 4-bits precision, the gains in hardware efficiency cannot amortize the losses in statistical efficiency. } Third, the actual throughput roughly stays the same with varying precision levels, as depicted in Figure~\ref{Throughput}, demonstrating that BiS-KM allows us to take full advantage of low precision. We conclude that BiS-KM is able to efficiently support any-precision clustering on the FPGA. \vspace{-1.5ex} \subsection{Statistical Efficiency: Loss vs. Iterations } \label{subsection_connection} \vspace{-0.5ex} \begin{figure}[t] \centering \subfloat[OpenStreetMap (28 features)]{\includegraphics[width=1.625in]{figure/mappingloss_epoch.png} \label{}} \subfloat[Forest (54 features)]{\includegraphics[width=1.625in]{figure/covtypeloss_epoch.png} \label{}} \hfill \subfloat[Gas (128 features)]{\includegraphics[width=1.625in]{figure/gasloss_epoch.png} \label{}} \subfloat[Epileptic (178 features)]{\includegraphics[width=1.625in]{figure/epiloss_epoch.png} \label{loss_iteration_epi}} \vspace{-1.5ex} \caption{Convergence comparison: training loss vs. iterations under various precision levels. In (d), the curve of 6-bit precision is out of the range of y-axis.} \label{loss_iteration} \end{figure} We now examine the statistical efficiency of BiS-KM with different precision levels, in terms of loss (i.e., within-cluster sum of square error) vs. iterations (Figure~\ref{loss_iteration}). We use the 32-bit precision Flex-KM as our baseline. We make four observations. First, low precision levels do converge to the same loss as 32-bit precision. Figure~\ref{loss_iteration} illustrates that a 12-bit precision level is adequate to converge to the same loss as the 32-bit precision does, demonstrating the great advantage of leveraging low precision. Second, a different data set can require a different minimum precision level to converge. Figure~\ref{loss_iteration} illustrates that the minimum precision level required by the OpenStreetMap, the Forest, the Gas and the Epileptic to converge to the same loss as 32-bit precision are 8 bits, 6 bits, 8 bits and 12 bits, respectively. 
This observation motivates our BiS-KM design allowing any-precision clustering with only one hardware implementation. Third, a low precision level is able to successfully enter a smaller local minimum as the 32-bit precision does. Figure~\ref{loss_iteration} (b) illustrates that the BiS-KM design with a low-precision level is capable of following the transfer from a local minimum to a smaller local minimum for the data set Forest, indicating that the statistical efficiency can be preserved when using low precision data. Fourth, BiS-KM typically requires a similar number of iterations to converge to the same loss compared with the 32-bit precision Flex-KM. Figure~\ref{loss_iteration} shows that BiS-KM requires roughly the same number of iterations to converge as Flex-KM does for the data sets OpenStreetMap, Gas and Epiletptic. We conclude that the low-precision clustering enabled by BiS-KM can preserve the statistical efficiency. \subsection{End-to-End Comparison: Loss vs. Time} \begin{figure}[t] \centering \subfloat[OpenStreetMap (28 features)]{\includegraphics[width=1.625in]{figure/mappingloss_runtime.png} \label{runtime_household}} \subfloat[Forest (54 features)]{\includegraphics[width=1.625in]{figure/covtypeloss_runtime.png} \label{runtime_covertype}} \hfill \subfloat[Gas (128 features)]{\includegraphics[width=1.625in]{figure/gasloss_runtime.png} \label{runtime_gas}} \subfloat[Epileptic (178 features)]{\includegraphics[width=1.625in]{figure/epiloss_runtime.png} \label{runtime_epi}} \vspace{-1.5ex} \caption{End-to-end comparison: training loss vs. runtime under various precision levels. In (d), the curve for 6-bit precision is out of the range of y-axis.} \label{loss_runtime} \end{figure} In this subsection, we validate that BiS-KM with the low-precision dataset outperforms the 32-bit precision Flex-KM, in terms of end-to-end convergence rate. Figure ~\ref{loss_runtime} shows the convergence trends, loss vs. runtime, with various precision levels for four data sets. We observe that low precision leads to a significantly faster convergence rate. For the data sets OpenStreetMap, Forest and Gas, BiS-KM can achieve about 4X speedup to reach the same loss as the 32-bit precision Flex-KM does. However, BiS-KM can only achieve roughly 2.5X speedup for the data set Epileptic, which requires a 12-bit precision to converge to the same training loss as Flex-KM does. \subsection{Comparison with CPU Implementations} \noindent{\bf CPU Baselines. }We choose a highly optimized multi-core AVX2-enhanced CPU implementation as our software baseline~\cite{bohm2017multi}. The software baseline is originally implemented with AVX2 64-bit double-precision instructions, labelled as ``CPU:64-bit double''. Actually, we try to achieve more data parallelism using two smaller vector types: \emph{vector float} and \emph{vector short}.\footnote{ Multiplication-related AVX2 instruction does not support \emph{vector char} type. Even when the dataset is in 8-bit precision, we cannot achieve more parallelism, since we have to pad to a 16-bit boundary for further computation. } Accordingly, we produce two more CPU baselines: ``CPU:32-bit float'' and ``CPU:16-bit fixed point'', to improve the performance of K-Means on CPUs. \noindent{\bf Comparison Methodology. }Since all the K-Means implementations on CPUs have roughly the same statistical efficiency as BiS-KM running at a reasonable precision level, the hardware efficiency comparison is the main metric showing the efficiency of BiS-KM. 
\noindent{\bf Comparison of Hardware Efficiency. }Figure ~\ref{cpu_comparison} illustrates the runtime-per-iteration comparison between the three software implementations and BiS-KM with the lowest precision level that leads to the same loss as 32-bit precision does, for the Gas and the Epileptic data set. The CPU implementation with a smaller vector datatype leads to higher performance, since a smaller vector datatype yields more data-level parallelism using SIMD and induces less memory traffic. BiS-KM is faster than ``CPU:64-bit double'' and ``CPU:32-bit float'', even though the 14-core CPU has 60GB/s memory bandwidth while our FPGA has only roughly 15GB/s. This is because BiS-KM takes advantage of low precision, e.g., using 6-bit precision. BiS-KM has roughly the same performance as ``CPU:16-bit fixed point'' with 6 (or 8) cores, since the K-Means algorithm is able to take full advantage of task-level (e.g., multi-core) and data-level (e.g., 32-way SIMD) parallelism on the CPU. Note, if we implement BiS-KM on a larger FPGA, e.g., VCU118, which has more FPGA resources and higher memory bandwidth, BiS-KM's performance would improve. Nevertheless, the fact that the FPGA can compete with 14 cores demonstrates the feasibility and advantages of the proposed approach even in its current configuration. \begin{figure}[t] \centering \subfloat[Gas]{\includegraphics[width=1.625in]{figure/gas_sw_cmp.png} \label{}} \subfloat[Epileptic]{\includegraphics[width=1.625in]{figure/epi_sw_cmp.png} \label{}} \vspace{-2ex} \caption{Runtime comparison between three CPU implementations with increasing number of cores and BiS-KM with the lowest precision level that is able to converge. } \label{cpu_comparison} \vspace{-3ex} \end{figure} \subsection{Resource Consumption Breakdown} Table ~\ref{tab:resources} shows the resource consumption breakdown of four modules in the BiS-KM hardware design. ALMs and BRAMs (i.e., ``M20Ks'' in the Table) are mostly used the cluster assignment and accumulation modules, while the DSP utilization is low since it is mainly used to calculate the squared $L^2$ norm in the center pre-processing module. Table ~\ref{tab:resources} also shows the resource consumption of the major components, e.g., Accu and Agg. We observe that each component requires a very small amount of FPGA resources. For example, each \emph{Dist} consumes about 0.1\% ALMs, allowing us to instantiate a massive amount of \emph{Dists} to process multiple cluster centers concurrently on the FPGA. 
\begin{table}[h] \caption{Resource consumption breakdown of the BiS-KM hardware design with $DIFP$=16 and $\#pipe$=32 } \vspace{-3ex} \begin{small} \begin{center} \def1.1{1.1} \begin{tabular}{l|c|c |c} \textbf{Resources} & \textbf{ALMs} & \textbf{M20Ks} & \textbf{DSPs}\\ \hline Center Norm & 786 (0.18\%) & 0 (0\%) & 48 (3.16\%)\\ Dist & 452 (0.11\%) & 0 (0\%) & 0 (0\%) \\ Accu & 1,789 (0.42\%) & 29 (3.75\%) & 0 (0\%)\\ Agg & 219 (0.05\%) & 3 (0.70\%) & 1 (0.07\%)\\ Div & 846 (0.20\%) & 1 (0.03\%) & 0 (0\%)\\ \hline \hline \hline Center pre-processing & 1,357 (0.32\%) & 26 (0.78\%) & 49 (3.22\%) \\ Cluster assignment & 115,522 (27.10\%)& 208 (6.21\%) & 0 (0\%) \\ Accumulation & 57,466 (13.45\%) & 931 (27.79\%) & 1 (0.07\%) \\ Division & 1,674 (0.39\%) & 14 (0.42\%) & 0 (0\%) \\ \hline BiS-KM & 176,019 (41.26\%) & 1,179 (35.19\%) & 50 (3.29\%) \\ \vspace{-3ex} \end{tabular} \end{center} \end{small} \label{tab:resources} \end{table} \section{Introduction} With the development of computer architecture, there is a considerable gap between the higher performance of the computing units and the slower speed of DRAM memory systems. With the development of various applications, such as Neural Network training that require large-scale data exchange, several research institutes such as Samsung and Micron have presented next-generation high-performance memory architectures, like Hybrid Memory Cube (HMC)~\cite{jeddeloh2012hybrid} and High Bandwidth Memory (HBM)~\cite{jun2017hbm}. In this paper, we intend to optimize memory performance on FPGAs~\cite{XilinxU280:2020} with High-Level Synthesis(HLS)~\cite{XilinxHLS:2020}. $\bullet$ The provided max bandwidth implementation by HLS in HBM is up to 431GB/s~\cite{XilinxHBM:2020}, which achieves the identical performance with the implementation by Register-Transfer Level (RTL)~\cite{XilinxRTL:2009}, such as Verilog or System Verilog. $\bullet$For the latency of HBM, the HLS implementation based on the Vitis platform~\cite{XilinxVitis:2020} and the RTL implementation have the same latency for HBM data access. Compared with the memory access delay of DDR4~\cite{o2017fine,Micron:2015,mi2010software}, the HBM access latency is more extensive. The increasing latency is because the connection between the on-chip IO and HBM memory structure is a crossbar controller~\cite{XilinxHBMFPGA:2019}. Compared with the DDR controller, its structure causes the average latency to increase. Compared with the implementation of the Vivado platform, the latency based on the Vitis platform has an absolute increase. The reason is that Vitis will encapsulate the kernel, whether implemented by HLS or RTL, which leads to an increase in latency. $\bullet$The Vitis-based HLS implementation of HBM access is equivalent to the implementation of CPU/GPU access to HBM. because of implemented on FPGA structure, it close to the HBM connection without multiple cache and control structure interference, and HLS implementation The HBM memory access parameters are the same as the direct access to the AXI port of HBM. Therefore, using the characteristics of the HLS and Vitis platforms can be similar to the CPU/GPU architecture~\cite{XilinxUltraFast:2020}. By benchmarking many fundamental parameters, we obtain the performance of HBM under different implementations, which provides a benchmark for various future applications with various access modes on the HBM platform. $\bullet$The Address Mapping Policy is Critical to High Bandwidth. 
Different address mapping policies lead to an order of magnitude throughput differences when running a typical memory access pattern (i.e., sequential traversal) on HBM, indicating the importance of matching the address mapping policy to a particular application. With the development of FPGA applications, numerous high-concurrency and high-performance applications implemented on FPGAs, such as AI, HPC, graph computing, have enormous demands such as ease to use and data storage access performance on FPGA. With the emergence of a generation of high-performance storage structures, such as HBM and HMC, how to efficiently use FPGAs to handle these high-performance storage has become an important topic. With Xilinx launching multiple platforms U280, U250, and Vitis based on Vitis U200, U50, and Xilinx Run Time (XRT)~\cite{xrt:2019} based on HLS and OpenCL further make it possible to implement high-performance hardware-based on High-level programming language. In order to make better use of High-level programming language to bring out the performance of the next-generation memory, the paper will be based on the U280 platform, using the HLS implementation based on the XRT and Vitis platforms, and fully implement the FPGA's HBM memory access performance and features in the XRT environment. A wide range of benchmark compared with the primary memory access benchmark based on the RTL implementation on the Vitis platform and the RTL implementation based on the standard Vivado platform and the DDR memory access benchmark. Through these large-scale and comprehensive benchmark, we will have a complete understanding of the HBM memory access performance and characteristics of FPGAs based on High-level programming language, and through the ease of use achieved by HLS, the benchmark architecture can also provide a universal benchmark platform. \section{System Overview} \section{Related Work} To our knowledge, BiS-KM is the first novel solution that incorporates algorithm, software and hardware designs to enable any-precision K-Means. We contrast closely related work with BiS-KM on 1) FPGA-accelerated K-Means, 2) fast bulk bit-wise operations and 3) low-precision DNN and ML. \noindent\textbf{FPGA-Accelerated K-Means. }There is a wide range of research on accelerating the K-Means with the FPGA for various applications. However, most of the existing approaches focus on high-precision input data ~\cite{saegusa2006fpga, wang2007k, gokhale2003kmean, estlick2001, hussain2012kmeans, wang2016melia, flex-kmeans, km_hpca16, triangle_inequility_kmeans, opencl_kmeans, choi2014mapreducekmeans, bioinformatics_kmeans}. Among these, there is very few work that has considered the low-precision K-Means. Estlick et al.~\cite{estlick2001} run the K-Means algorithm on the CPU over the truncated datasets, whose $B$ least significant bits are truncated, where $B$ is 4, 6, or 8. In contrast, BiS-KM enables any-precision K-Means clustering using a single FPGA design. \noindent\textbf{Fast Bulk Bit-wise Operations. }A broad range of applications, such as database scans~\cite{bit_weaving, byteslice, ambit, bitwise_agg, hebe_icde18, vectorize_scan} and low-precision machine learning and neural networks~\cite{ml_weaving, finn_bnn, bismo} use fast bulk bit-wise operations to improve their performance. Closest to BiS-KM is the work by Wang et al.~\cite{ml_weaving} that proposes a customized MLWeaving memory layout to facilitate the hardware design of low-precision generalized linear model training. \noindent\textbf{Low-Precision DNN and ML. 
}Hardware acceleration of deep neural networks~\cite{fpga_cnn_li_jing, fpga_cnn_yun_liang, cnn_jason_cong, fpga_cnn_luo_guojie, fpga_cnn_tshinghua} and machine learning algorithms~\cite{dt,Kara2018ColumnMLCM,doppopdb2} has been a common topic for many years. Recently, researchers focus shifts to use low-precision hardware to further accelerate these workloads because the statistical efficiency of these algorithms can be well preserved in low precision. Plenty of low-precision designs~\cite{sgd-kaan-fccm, zipml, finn_bnn, vibnn_asplos18} focus on using a fixed quantization of data and a fixed-bitwidth accelerator to accelerate DNN and ML workloads, while other research work ~\cite{Stripes, bit_fusion_isca18, bismo} focuses on exploiting the bit-level precision variability of hardware arithmetic for interference. In contrast, BiS-KM focuses on any-precision K-Means clustering.
proofpile-arXiv_059-15762
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0015.json.gz" }
\section{\@startsection {section}{1}{\z@}% {-10pt \@plus -1ex \@minus -.2ex}{.5ex }{\normalfont\Large\bfseries\sectionfont}} \renewcommand\subsection{\@startsection{subsection}{2}{\z@}% {10pt\@plus 1ex \@minus.2ex}{-0.5ex \@plus.2ex}{\normalfont\large\bfseries\subsectionfont}} \def\frontmatter@title@format{\titlefont\centering}% \def\frontmatter@title@below{\addvspace{-5pt}}% \def\dropcap#1{\setbox1=\hbox{\dropcapfont\uppercase{#1}\hskip1pt} \hangindent=\wd1 \hangafter-2 \noindent\llap{\vbox to0pt{\vskip-7pt\copy1\vss}}} \renewenvironment{thebibliography}[1]{% \bib@heading% \ifx\bibpreamble\relax\else\ifx\bibpreamble\@empty\else \noindent\bibpreamble\par\nobreak \fi\fi \list{\@biblabel{\@arabic\c@enumiv}}% {\settowidth\labelwidth{\@biblabel{#1}}% \leftmargin\labelwidth \advance\leftmargin\labelsep \@openbib@code \usecounter{enumiv}% \let\p@enumiv\@empty \renewcommand*\theenumiv{\@arabic\c@enumiv} }% \sloppy\clubpenalty4000\widowpenalty4000% \sfcode`\.=\@m} {\def\@noitemerr {\@latex@warning{Empty `thebibliography' environment}}% \endlist} \newcommand*\bib@heading{% \section{\refname \fontsize{8}{10}\selectfont } \newcommand*\@openbib@code{% \advance\leftmargin\bibindent \itemindent -\bibindent \listparindent \itemindent \parsep \z@ }% \newdimen\bibindent \bibindent=0.0em \makeatother \newcommand{Energy Technologies Area, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA}{Energy Technologies Area, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA} \newcommand{Department of Materials Science \& Engineering, Northwestern University, Evanston, IL 60208, USA}{Department of Materials Science \& Engineering, Northwestern University, Evanston, IL 60208, USA} \newcommand{Department of Applied Physics, Yale University, New Haven, CT 06511, USA}{Department of Applied Physics, Yale University, New Haven, CT 06511, USA} \newcommand{Energy Sciences Institute, Yale University, West Haven, CT 06516, USA}{Energy Sciences Institute, Yale University, West Haven, CT 06516, USA} \begin{document} \title{Optimal Band Structure for Thermoelectrics with Realistic Scattering and Bands} \author{Junsoo Park} \email{qkwnstn@gmail.com} \affiliation{Energy Technologies Area, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA} \author{Yi Xia} \affiliation{Department of Materials Science \& Engineering, Northwestern University, Evanston, IL 60208, USA} \author{Vidvuds Ozoli\c{n}\v{s}} \affiliation{Department of Applied Physics, Yale University, New Haven, CT 06511, USA} \affiliation{Energy Sciences Institute, Yale University, West Haven, CT 06516, USA} \author{Anubhav Jain} \email{ajain@lbl.gov} \affiliation{Energy Technologies Area, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA} \date{\today} \begin{abstract} Understanding how to optimize electronic band structures for thermoelectrics is a topic of long-standing interest in the community. Prior models have been limited to simplified bands and/or scattering models. In this study, we apply more rigorous scattering treatments to more realistic model band structures - upward-parabolic bands that inflect to an inverted parabolic behavior - including cases of multiple bands. In contrast to common descriptors (e.g., quality factor and complexity factor), the degree to which multiple pockets improve thermoelectric performance is bounded by interband scattering and the relative shapes of the bands. 
We establish that extremely anisotropic `flat-and-dispersive' bands, although best-performing in theory, may not represent a promising design strategy in practice. Critically, from perfect transport cutoffs, we determine the optimum bandwidth, dependent on temperature and lattice thermal conductivity, which can in theory boost $zT$ significantly beyond the values attainable through intrinsic band structures alone. Our analysis should be widely useful as the thermoelectric research community eyes $zT>3$.
\end{abstract}

\maketitle

\section{Introduction}

Thermoelectricity enables clean electricity generation and fluid-free cooling. The ultimate goal of basic thermoelectric materials research is to design or discover materials with a high figure of merit $zT$, commonly expressed as
\begin{equation}\label{eq:zt1}
zT=\frac{\alpha^{2}\sigma}{\kappa_{e}+\kappa_{\text{lat}}}T.
\end{equation}
Here, the thermoelectric power factor (PF) is the product of Ohmic charge conductivity ($\sigma$) and the Seebeck coefficient ($\alpha$) squared. The total thermal conductivity $\kappa$ is the sum of electronic thermal conductivity ($\kappa_{e}$) and lattice thermal conductivity ($\kappa_{\text{lat}}$). A major challenge in achieving high $zT$ and PF is that the electronic transport quantities are linked by a set of anti-complementary correlations: \cite{complex,newandold,intuition,perspectivesonthermoelectrics,thermoelectricmaterials,compromisesynergy,advancesinthermoelectrics,advancesinthermoelectricmaterials,onthetuning,computationalthermoelectrics,computationalenergymaterials} $\sigma$ and $\kappa_{e}$ are positively correlated whereas $\sigma$ and $\alpha$ are negatively correlated. Only $\kappa_{\text{lat}}$, a lattice property, is relatively independent, though it too exhibits some positive correlation with $\sigma$ through structural symmetry. These interrelations make it difficult to determine the effect of various design strategies for optimizing $zT$.

Equations based on the single parabolic band (SPB) model often underpin intuition about thermoelectric behavior. However, they tacitly assume that there is always enough (infinite) dispersion in all directions to cover the entire energy range relevant to thermoelectric phenomena. Instead, in most cases of practical interest, a band's dispersion changes in curvature (e.g., from positive to negative), crosses the Brillouin zone (BZ) boundary orthogonally, and tops out at some maximum energy. In addition to band shape considerations, thermoelectric properties can vary widely depending on what is assumed of the scattering behavior. Typical models and descriptors assume a behavior that is dominated by intraband/intravalley, elastic acoustic phonon scattering, and can be derailed when other scattering mechanisms and interband/intervalley transitions have large effects \cite{wangsnyderbook,valleytronics1,valleytronics2,roleofscattering,bandalignmentscattering}. Several studies have analytically investigated thermoelectricity using model band structures and scattering \cite{roleofscattering,bandalignmentscattering,simplescattering,optimalbandwidth,bestbandstructure}, but they had one or more of the following limitations: 1) the bands were purely parabolic or parabolic-like with infinite dispersion; 2) only a single isotropic band was considered; 3) models for scattering and/or transport were based on constant lifetimes, constant mean free paths, or at best scattering proportional to the density of states (DOS).
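As a concrete point of reference for the quantities in Eq. \ref{eq:zt1}, the short sketch below assembles the PF and $zT$ from hypothetical but representative transport values; the numbers are purely illustrative and are not drawn from any material studied here.
\begin{verbatim}
# Minimal numeric illustration of Eq. (1); all values are hypothetical.
alpha     = 200e-6   # Seebeck coefficient, V/K
sigma     = 1e5      # Ohmic conductivity, S/m
kappa_e   = 0.75     # electronic thermal conductivity, W/(m K)
kappa_lat = 0.5      # lattice thermal conductivity, W/(m K)
T         = 500.0    # absolute temperature, K

PF = alpha**2 * sigma                 # power factor, W/(m K^2)
zT = PF * T / (kappa_e + kappa_lat)
print(f"PF = {1e3*PF:.1f} mW/(m K^2), zT = {zT:.2f}")
# -> PF = 4.0 mW/(m K^2), zT = 1.60
\end{verbatim}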
\begin{figure*}[tp]
\centering
\includegraphics[width=1 \linewidth]{bandevolution}
\caption{The evolution of band structure models used in this study. \textbf{a)} A single band changes in effective mass. \textbf{b)} One band (orange) changes in effective mass while another band (blue) is fixed. \textbf{c)} The valence band (orange) changes in effective mass while the conduction band (blue) is fixed. Note that two-dimensional (2D) band structures are shown for graphical purposes. In the study where three-dimensional (3D) bands are used, two types of anisotropic evolution are considered: one where a band grows heavy in one direction and another where the band grows heavy in two directions. Each band is an upward paraboloid smoothly inflecting to an inverted paraboloid halfway to the BZ boundary.}
\label{fig:bandevolution}
\end{figure*}

To more generally address the topic of optimal band structure, we create more realistic model solid-state band structures and more faithfully model carrier scattering due to multiple sources. Our band structures are properly confined to a finite BZ with smooth inversion of upward (downward) parabolicity to downward (upward) parabolicity for describing conduction (valence) states - a key for retaining generality, physicality, and approximate compatibility with established scattering formalism. We modify established formulae for various scattering mechanisms - deformation-potential scattering (DPS), polar-optical scattering (POS), and ionized-impurity scattering (IIS) - so as to capture the effects of inverted parabolicity, anisotropy, and band multiplicity on carrier lifetimes. Refer to Methods for further details. We monitor how thermoelectric properties of one or more bands respond to variations in band shapes (see Fig. \ref{fig:bandevolution}). Our study fine-tunes conclusions drawn from simpler models on design strategies such as anisotropy, band multiplicity, and resonance levels. Finally, we determine the optimum bandwidth as a function of temperature and $\kappa_{\text{lat}}$, which improves $zT$ beyond what is normally accessible.

We start by rewriting Eq. \ref{eq:zt1} to better reflect fundamental transport relations:
\begin{equation}\label{eq:zt2}
zT=\frac{(\zeta^{2}/\sigma)}{\kappa_{e}+\kappa_{\text{lat}}}T.
\end{equation}
In Eq. \ref{eq:zt2}, the key role is played by $\zeta$, a quantity for which there appears to be no conventional name. We refer to it as the `thermoelectric conductivity'; in the Onsager-Callen formulation of coupled charge-and-heat conduction \cite{onsager1,onsager2,callen}, $\zeta$ is the quantity responsible for the thermal-gradient-to-charge-current conversion ($\mathbf{J}_{c}=\sigma\mathbf{E}-\zeta\nabla T$). That is, $\zeta$ represents the charge conductivity due to thermal driving force, the essence of thermoelectricity. Eq. \ref{eq:zt2} lifts the hidden coupling between $\alpha$ and $\sigma$ ($\alpha=\zeta/\sigma$) and correctly identifies $\zeta$ as the quantity that must be high while $\sigma$ must in fact be \textit{low}. That is, we desire high thermoelectric conductivity, not Ohmic conductivity - a correction to the routine but ambiguous thermoelectric adage that `electrical conductivity' must be high.

\begin{figure*}[tp]
\centering
\includegraphics[width=1 \linewidth]{3dsingleopt}
\caption{Single-band thermoelectric properties in the light direction ($x$) with $m_{x}=0.05$, with respect to its effective mass profile.
\textbf{a)} Fermi level and carrier concentrations at optimum $zT$, \textbf{b)} optimum $zT$, \textbf{c)} the power factor, and \textbf{d)} the Seebeck coefficient, in the $x$-direction. Each zone (as enclosed by vertical gray lines) indicates a certain characteristic evolution: isotropic increase in $m$ from 0.05 to 500 in Zone 1, anisotropic increase in $m_{y}$ from 0.05 to 500 in Zone 2, and anisotropic increase in both $m_{y}$ and $m_{z}$ from 0.05 to 500 in Zone 3. Four different scattering regimes are considered: the POS limit (\color{blue}blue\color{black}), the IIS limit (\color{green}green\color{black}), the DPS limit (\color{red}red\color{black}), and the overall effect (black). Supplementary results for $\sigma, \mu, \zeta, \kappa_{e}, L, z_{e}T$ are in Supplementary Fig. 8.}
\label{fig:3dsingleopt}
\end{figure*}

Insights into maximizing $zT$ are attained by examining Eq. \ref{eq:zt2} through the Boltzmann transport formalism \cite{dovertheoretical,ziman,btecoefficients,boltztrap},
\begin{equation}\label{eq:sigma}
\sigma=\frac{1}{V} \int \Sigma(E)\left(-\frac{\partial f}{\partial E}\right)dE,
\end{equation}
\begin{equation}\label{eq:zeta}
\zeta=\frac{1}{V T} \int (E_{\text{F}}-E)\Sigma(E)\left(-\frac{\partial f}{\partial E}\right)dE,
\end{equation}
\begin{equation}\label{eq:kappae}
\kappa_{e}=\frac{1}{V T} \int (E_{\text{F}}-E)^{2}\Sigma(E)\left(-\frac{\partial f}{\partial E}\right)dE-\frac{\zeta^{2}}{\sigma} T,
\end{equation}
where $V$ is the cell volume, $E_{\text{F}}$ is the Fermi level, $f(E)$ is the Fermi-Dirac distribution, and $\Sigma(E)=v^{2}(E)\tau(E)D(E)$ is the spectral conductivity, composed of group velocity ($v$), lifetime ($\tau$), and DOS ($D$). The three integrands share in common the term $\Sigma(E)\left(-\frac{\partial f}{\partial E}\right)$, the source of the positive correlations between $\sigma$, $\zeta$, and $\kappa_{e}$. The integrands differ only in the power relation $(E_{\text{F}}-E)^{p}$ as $p=0, 1, 2$. This juxtaposition states that, in relative terms, low-energy carriers contribute most to $\sigma$, high-energy carriers contribute most to $\kappa_{e}$, while it is the medium-energy carriers that are most responsible for $\zeta$. That is to say, if one wished to increase $\zeta$ \textit{relative to} $\sigma$ and $\kappa_{e}$, then $\Sigma(E)$ should be high in some medium-energy range and low elsewhere. The results that follow are interpreted with this picture in mind.

\section{Results}

\begin{figure*}[tp]
\centering
\includegraphics[width=1 \linewidth]{3ddoubleopt}
\caption{Two-band thermoelectric properties in the light direction ($x$) with respect to the evolution of the second band's effective mass profile while the first band is fixed at $m_{x}=m_{y}=m_{z}=0.05$, with $s_{\text{int}}=0.5$. \textbf{a)} Fermi level and carrier concentrations for optimum $zT$, \textbf{b)} optimum $zT$, \textbf{c)} the power factor, and \textbf{d)} the Seebeck coefficient, in the $x$-direction. Each zone (as enclosed by vertical gray lines) indicates a certain characteristic evolution of the second band: isotropic increase in $m$ from 0.05 to 500 in Zone 1, anisotropic increase in $m_{y}$ from 0.05 to 500 in Zone 2, and anisotropic increase in both $m_{y}$ and $m_{z}$ from 0.05 to 500 in Zone 3. Four different scattering regimes are considered: the POS limit (\color{blue}blue\color{black}), the IIS limit (\color{green}green\color{black}), the DPS limit (\color{red}red\color{black}), and the overall effect (black).
Supplementary results for $\sigma, \mu, \zeta, \kappa_{e}, L, z_{e}T$ are in Supplementary Fig. 10.}
\label{fig:3ddoubleopt}
\end{figure*}

\subsection{Optimal Performance - Single Band}

Here we investigate how a single band may yield the highest $zT$ with $E_{\text{F}}$ optimized for it. The performance is evaluated for different band structure shapes as depicted in the three `zones' of Fig. \ref{fig:3dsingleopt}: 1) isotropic increase in $m$, 2) anisotropic increase in $m$ in one direction (`unidirectional anisotropy'), and 3) anisotropic increase in $m$ in two directions (`bidirectional anisotropy'). The Seebeck coefficient predicted at a fixed $E_{\text{F}}$ is provided in Supplementary Discussion, which pinpoints how and why our model predictions deviate from the SPB model. The fluctuations in optimal $E_{\text{F}}$, displayed in Fig. \ref{fig:3dsingleopt}a, are also analyzed there.

We first consider the case where $m$ varies isotropically (Zone 1 in Figs. \ref{fig:3dsingleopt}b--d). As expected, a light band is definitely preferred: the PF and $zT$ both decrease with increasing $m$, as numerous studies agree \cite{loweffmass,materialdescriptors,complexity}. A lighter band has higher mobility ($\mu$) and thus requires less carrier concentration ($n$) to provide a given value of $\sigma$ ($\sigma=n\mu$), which helps retain high $\alpha$.

We observe that anisotropy is immensely beneficial (see Zones 2 and 3 in Fig. \ref{fig:3dsingleopt}). Because DPS is almost exactly proportional to DOS, the performance under DPS is a clear indicator of the important role played by the energy-dependence of group velocity, $\langle v^{2}(E) \rangle$, which steepens with band anisotropy to enhance performance. See Supplementary Fig. 6 for the schematic. Steepening $\langle v^{2}(E) \rangle$ increases $\zeta$ over $\sigma$, simultaneously lowering optimal $E_{\text{F}}$. We make three major observations. First, in terms of $zT$, bidirectional anisotropy (one light, two heavy directions) outperforms unidirectional anisotropy (two light, one heavy direction). This is because $\langle v^{2}(E) \rangle$ in the former evolves to a one-dimensional-like profile, which is steeper than the two-dimensional-like profile that $\langle v^{2}(E) \rangle$ evolves to in the latter. Second, toward the extreme limit, both types of anisotropy plateau in performance. This occurs for two reasons: for one, $\langle v^{2}(E) \rangle$ converges to the respective low-dimensional linear limits, and for two, extreme anisotropy exhausts `low-energy voids'. Refer to Supplementary Discussion for details. Third, because IIS and POS are less dependent on $D(E)$ than DPS, anisotropy is even more beneficial when they are the dominant mechanisms. Eventually, though, because DPS increases most rapidly with DOS, it becomes dominant as anisotropy grows large. Overall, we observe that anisotropy improves $zT$ by as much as a factor of 3 above the isotropic value. Values in Fig. \ref{fig:3dsingleopt} would be lower if $m_{x}$ were larger and $\kappa_{\text{lat}}$ were higher. In Supplementary Fig. 7, we show that under $m_{x}=0.1$ and $\kappa_{\text{lat}}=1$ W m$^{-1}$ K$^{-1}$, $zT$ is limited to 5 rather than 9, in a closer neighborhood of the state-of-the-art, but draws the same relative benefit from anisotropy.

\subsection{Optimal Performance - Multiple Bands}

Realistic band structures often feature multiple bands near the Fermi level.
One of the best designs known for increasing $\sigma$ without paying a penalty on $\alpha$ is multiplicity of band pockets aligned in energy \cite{halfheuslerbanddegeneracy,zintlorbitalengineering,chalcopyritebandconvergence1,bandconvergencereview,pbtebandconvergence}. Band multiplicity comes in various forms, however; we therefore examine the effects of (i) multiplicity of identical bands, (ii) coexistence of inequivalent bands (varying the second band's shape), and (iii) bipolar transport in the presence of valence and conduction bands (varying the valence band's shape). These band structures are illustrated in Fig. \ref{fig:bandevolution}b-c. As justified in the Methods section, our modeling of interband/intervalley scattering (henceforth inter-scattering) expands the phase space owing to the additional band and uses the factor $s_{\text{int}}=0.5$, making it half as strong as intraband/intravalley scattering (henceforth intra-scattering). For comparison, we also provide results obtained with $s_{\text{int}}=0$ (no inter-scattering) in Supplementary Fig. 11.

We start from two identical bands with aligned band minima, the first of which is isotropic and fixed while the second band then evolves according to Fig. \ref{fig:bandevolution}b. $E_{\text{F}}$ is again optimized for maximum $zT$. The results are plotted in Fig. \ref{fig:3ddoubleopt}. The left edge of Zone 1 for each plot, where the two bands are identical, represents symmetry-degenerate band pockets. This offers higher $zT$ and PF as compared to the case of a single band (Fig. \ref{fig:3dsingleopt}), though by less than twofold. Two identical bands result in essentially identical $\alpha$, whereas $\sigma$ benefits from the doubled $n$, somewhat negated by inter-scattering.

One question of interest is the effect of increasing the number of identical carrier pockets. It is generally known that the more pockets the better, though it is straightforward even from our simplified analysis that doubling their number does not double the PF or $zT$ due to inter-scattering. For $N_{v}$ band pockets, $n\propto N_{v}$ while $\tau\propto \left(1+s_{\text{int}}(N_{v}-1)\right)^{-1}$. Then $\sigma\propto N_{v}\left(1+s_{\text{int}}(N_{v}-1)\right)^{-1}$, which as $N_{v}$ grows saturates to $s_{\text{int}}^{-1}$. For example, with the $s_{\text{int}}=0.5$ that we assume, the maximum PF gain even with an infinite number of identical pockets is a factor of 2. In fact, if inter-scattering is somehow stronger than intra-scattering, $s_{\text{int}}>1$, then $N_{v}$ is detrimental. As such, the benefit of $N_{v}$ is bounded by the degree of inter-scattering, whose minimization should be a priority of multi-band strategies, e.g., by focusing on pockets located at distant points in the BZ \cite{bandconvergence}. Furthermore, if $\kappa_{e}\gg\kappa_{\text{lat}}$, then $N_{v}$ is rather unimportant for $zT$ because $\kappa_{e}$ increases as much as the PF. If $\kappa_{\text{lat}}=0$, hypothetically, then $N_{v}$ would have no effect, as it cancels exactly between the PF and $\kappa_{e}$.

Next, keeping the principal first band fixed in shape and maintaining $s_{\text{int}}=0.5$, we make the second band heavier. As it turns heavier isotropically (Zone 1 in Fig. \ref{fig:3ddoubleopt}), $zT$ and the PF increasingly suffer until they sink well below even the values that the fixed principal band alone generates (compared to Fig. \ref{fig:3dsingleopt}).
This means that non-symmetry-related, accidentally degenerate pockets harm $zT$ if their band masses in the transport direction ($m_{x}$) are sufficiently different. Two main reasons account for this. As the second band grows heavier in the transport direction ($x$), its direct contribution to transport diminishes. It also indirectly sabotages the lighter principal band by triggering heavier inter-scattering overall. This holds until the second band becomes narrow enough for it to function as a resonance level and selectively scatter low-energy carriers, whereby $zT$ and the PF rebound. They do not fully recover the values generated by the original twin degenerate bands unless DPS or POS dominates. The presence of strong IIS, due to the high impurity concentration required for doping a very heavy band, could prevent the resonance-level effect from manifesting.

If the second band evolves anisotropically in the $y$ and/or the $z$ directions, the thermoelectric response is largely similar to what is seen for a single band turning anisotropic. Anisotropy increases $\alpha$ and the PF as well as $zT$ until they plateau. Also, $zT$ is not noticeably higher here than in the case of a single anisotropic band because the anisotropic band dominates transport and $\kappa_{e}\gg\kappa_{\text{lat}}$. This again is a nod to the decreasing importance of band multiplicity if $\kappa_{e}\gg\kappa_{\text{lat}}$.

Another two-band situation is a semimetallic one in which there exists a `conduction band' and a `valence band' with no gap in between, triggering bipolar transport. The bipolar effect is a significant suppressor of the Seebeck coefficients of metals and small-gap semiconductors. Extrapolating the lessons from above, it is rather straightforward that for $\zeta$ to be large in magnitude (positive or negative), $\Sigma(E)$ must be highly asymmetric about the Fermi level, juxtaposing mobile and anisotropic `conduction' bands against isotropically heavy `valence' bands or vice versa. We confirm this by fixing the conduction band and evolving the valence band as described in Fig. \ref{fig:bandevolution}c. The results are in Supplementary Fig. 12 and Discussion. It is therefore no surprise that high-performing semimetals and narrow-gap semiconductors feature quite drastic band asymmetries about the Fermi level \cite{cosiyi,cosisame,mos2yi,ybal3epw,thermoelectricsemimetals,asymmetricbands}.

\begin{figure*}
\includegraphics[width=1 \linewidth]{bandwidth}
\caption{Optimum bandwidth, Fermi level, and $zT$. \textbf{a)} $T$-and-$\kappa_{\text{lat}}$-dependent optimum bandwidth and $zT$ under DPS for an isotropic 3D parabolic band of $m_{\text{GaAs}}=0.067$, and \textbf{b)} the optimum Fermi level for each point. The lower the $\kappa_{\text{lat}}$, the lower the optimum $E_{\text{F}}$ and $W_{\text{opt}}$. $\kappa_{\text{lat}}$ is given in W m$^{-1}$K$^{-1}$.}
\label{fig:bandwidth}
\end{figure*}

\subsection{Optimum Bandwidth from Perfect Transport Cutoff}

The relative energy ranges from which $\sigma$, $\zeta$, and $\kappa_{e}$ draw contributions imply that the best performance would be obtained by suppressing both low-energy contributions (to suppress Ohmic current) and high-energy contributions (to suppress thermal current), thus limiting transport only to a certain medium-energy range. Accordingly, we investigate the scenario in which the contribution to transport abruptly vanishes at some optimum energy (see the inset in Fig. \ref{fig:bandwidth}a), which we define as the optimum bandwidth ($W_{\text{opt}}$).
It essentially represents the optimum transport distribution width. Mathematically, $W_{\text{opt}}$ is obtained by solving the following maximization:
\begin{equation}\label{eq:optimizezt}
W_{\text{opt}}=\text{argmax}_{W}\left[\frac{\zeta^{2}(W)/\sigma(W)}{\kappa_{e}(W)+\kappa_{\text{lat}}}T\right]
\end{equation}
where, for instance, $\sigma(W)=\frac{1}{V}\int_{0}^{W} \Sigma(E)\left(-\frac{\partial f}{\partial E}\right)dE$. A finite bandwidth of our definition would arise for a band that is abruptly crossed by numerous perfect energy-filtering states acting as perfect resonance levels, or a band that sharply and discontinuously flattens out. Admittedly, neither is achievable to perfection in real life, but it is that theoretical limit that interests us.

We consider an isotropic parabolic band under DPS and optimize $E_{\text{F}}$. Unlike previous studies, we find that there does exist a finite optimum bandwidth for thermoelectrics that depends on temperature and $\kappa_{\text{lat}}$, as delineated by Fig. \ref{fig:bandwidth}. Achieving $W_{\text{opt}}$ would be a tremendous boost for $zT$. Assuming $\kappa_{\text{lat}}<0.5$ W m$^{-1}$ K$^{-1}$ and $m=0.067$, $W_{\text{opt}}$ elevates $zT$ well beyond 10 - higher than any value attainable through the plain band structures of the previous sections. For given $\kappa_{\text{lat}}$, $W_{\text{opt}}$ generally increases with temperature, as expected from the larger range of carrier excitation at higher temperatures. This implies that achieving $W_{\text{opt}}$ is particularly consequential for low temperatures ($T\le300$ K), where a difference of $0.1\sim0.2$ eV can shift $zT$ by nearly an order of magnitude. As $\kappa_{\text{lat}}$ vanishes, $W_{\text{opt}}$ also vanishes, and $zT$ diverges. This is the Mahan-Sofo limit, named after the seminal work that deduced a widthless band to be optimal if $\kappa_{\text{lat}}=0$ \cite{bestthermoelectric}. Our recovery of this limit is also evidenced by Supplementary Fig. 8f, in which the `electronic-part' $zT$, labeled $z_{e}T$, diverges to infinity as the band completely flattens out. In the other extreme, as $\kappa_{\text{lat}}$ becomes very high, $W_{\text{opt}}$ diverges, i.e., it becomes virtually irrelevant for $zT$. Further analysis is provided in Supplementary Discussion.

\section{Discussion}

Commonly used descriptors for thermoelectric performance include the quality factor (QF), which under DPS is \cite{pbsequalityfactor,qualityfactor}
\begin{equation}\label{eq:qualityfactor}
\beta=T\frac{2k_{\text{B}}^{2}}{3\pi}\frac{\rho v_{s}^{2}N_{v}}{m\Delta^{2}\kappa_{\text{lat}}},
\end{equation}
and the Fermi surface complexity factor \cite{complexity}
\begin{equation}\label{eq:complexity}
C=N_{v}\left(\frac{2}{3}\left(\frac{m_{\perp}}{m_{\parallel}}\right)^{-\frac{1}{3}}+\frac{1}{3}\left(\frac{m_{\perp}}{m_{\parallel}}\right)^{\frac{2}{3}}\right)^{3/2}.
\end{equation}
Both metrics promote small effective mass ($m$ or $m_{\parallel}$) and high band multiplicity ($N_{v}$); the latter further promotes band anisotropy $\left(\frac{m_{\perp}}{m_{\parallel}}\right)$. This study serves as a general assessment of these well-known blueprints in thermoelectrics, confirming some while offering fresh perspectives and more complete physical pictures for others.

1. Small $m$ in the transport direction is always better.

2. Band anisotropy is very beneficial, but the extent depends on its type.
The advantage of anisotropy derives largely from the fact that $\langle v^{2}(E) \rangle$ rises to steeper, low-dimensional slopes. Bidirectional anisotropy mimicking a 1D band structure is particularly beneficial, capable of increasing maximum $zT$ by nearly threefold for a given $m$ in the light direction.

3. Although not captured by Eq. \ref{eq:qualityfactor} or Eq. \ref{eq:complexity}, the gains from pocket multiplicity and band convergence depend on the relative shapes of the bands and what is assumed of interband scattering. A heavier pocket in the presence of a lighter pocket can be detrimental. Metrics such as the QF or $C$ always predict better performance in the presence of more bands because they do not possess any component that accounts for inter-scattering or the differential intrinsic transport of each band. The metrics ought to take these effects into account by bounding the gain from $N_{v}$, e.g., using a term such as $N_{v}\left(1+s_{\text{int}}(N_{v}-1)\right)^{-1}$ as was previously described.

4. Within the limits of our investigation, the type of scattering mechanism does not play a pivotal role in determining what band structure is optimal for $zT$, except in the context of resonance levels. In other words, the best-performing band structure is for the most part the same under DPS, POS, or IIS. The type of scattering decides how much $zT$ improves or suffers as a band transforms, but no transformation is decisively beneficial under one scattering regime but decisively detrimental under another. As one exception, resonance levels are beneficial if the dominant scattering mechanism is efficient at energy-filtering - DPS or POS. If an ineffective filtering mechanism, such as IIS, activates comparably or dominates, then resonance levels lose merit.

5. There exist optimum bandwidths for a plain parabolic band if the transport contribution can be, albeit hypothetically, abruptly curtailed at some energy. Optimum bandwidth arises because low-energy states are undesired owing to their large contribution to $\sigma$ and high-energy states are undesired owing to their large contribution to $\kappa_{e}$. $W_{\text{opt}}$ for a given $m$ depends on temperature and $\kappa_{\text{lat}}$. It is small ($< 0.3$ eV) so long as $\kappa_{\text{lat}}<1$ W m$^{-1}$ K$^{-1}$, and can push $zT$ beyond what is normally accessible. We stress that our investigation of optimum bandwidth has distinct characteristics from previous investigations in terms of both approach and conclusion. Mahan and Sofo deduced that a fully localized, widthless transport distribution (a completely flat band) would deliver maximal thermoelectric performance \cite{bestthermoelectric}, but under the assumption that $\kappa_{\text{lat}}=0$. Because $\kappa_{\text{lat}}>0$ in real materials, a widthless band and transport distribution would yield $zT=0$ as $v(E)$ and the PF vanish alike (see Fig. \ref{fig:3dsingleopt}b Zone 1). In later studies of optimum bandwidth, the `full-width' definition of bandwidth, $E_{\text{max}}-E_{\text{min}}$, was adopted \cite{optimalbandwidth,bestbandstructure}. A major limitation of this setup is that the full-width is inherently coupled with $m$ or the size of the BZ. Because smaller $m$ in the transport direction is always beneficial, bandwidth optimality must be probed independently under a fixed $m$, as done in this study. Indeed, in Ref.
\cite{optimalbandwidth}, it was determined that an optimum full-width does not exist (it is infinite) under $\tau(E)\propto D^{-1}(E)$, as $zT$ continues to increase with larger full-width, likely due to the concomitantly decreasing $m$. In contrast to these studies, we herein find a temperature-dependent, finite optimum bandwidth in the presence of finite $\kappa_{\text{lat}}$. Our bandwidth represents scenarios whereby a band flattens abruptly or features such as high-energy resonance levels are engineered. Our conclusions are more practically relevant than a vanishing bandwidth under zero $\kappa_{\text{lat}}$ or an infinite full-width that is coupled with $m$ or the BZ size.

6. Though more of a philosophical point, we propose that analysis of thermoelectrics be more frequently framed in terms of the `thermoelectric conductivity,' $\zeta$, which offers more straightforward insights than analysis framed in terms of the Seebeck coefficient and Ohmic conductivity. By juxtaposing $\zeta$ against $\sigma$ and $\kappa_{e}$, it becomes clear that a band must develop high $\Sigma(E)$ in the mid-energy region to be optimal for thermoelectric application. Our finding of finite optimum bandwidth resonates with this intuition.

Reflection on real materials is also in order. In spite of the theoretically remarkable performance of extremely anisotropic, `flat-and-dispersive' band structures, they would in practice be subject to a disadvantage due to the polycrystallinity of commercial-scale materials as well as symmetry considerations. Indeed, no candidate material has thus far achieved the high $zT$ modeled here, and we suggest why in light of our modeling. We distinguish bands in a cubic cell from those in a non-cubic cell. In a non-cubic cell, a flat-and-dispersive band limits light transport to only certain direction(s). Assuming polycrystallinity, conductivity through a series of differently oriented grains is best described by the lower Wiener bound for composite media, i.e., the harmonic average of directional conductivities \cite{heterogeneousmedia}. Due to poor conductivities along the heavy branch(es), the harmonic average seriously hampers the overall performance under our model, as described in Supplementary Fig. 9. An anisotropic band is then never as good as its isotropic counterpart, whose polycrystalline-averaged conductivities are identical to those in any principal direction. Bi$_{2}$PdO$_{4}$ \cite{bi2pdo4prediction} and BaPdS$_{2}$ \cite{bapds2flatdispersive} are good examples: neither compound is cubic, but both exhibit bidirectional-anisotropic flat-and-dispersive valence bands. The DOS profiles are characterized by peak-like protrusions near the band edges followed by decays, confirming the 1D-like band structure. Polycrystalline Bi$_{2}$PdO$_{4}$ has been experimentally synthesized and investigated, but recorded a rather disappointing $p$-type PF (1 mW m$^{-1}$ K$^{-2}$) and $zT$ (0.06) \cite{bi2pdo4experiment}. Because carriers are mobile only in one direction and transport is inhibited in the two heavy directions by design, it is unlikely that the presumably high thermoelectric potential in the light direction would shine through unless the sample is a single crystal. BaPdS$_{2}$ has not yet been tested, but it is reasonable to hypothesize that it may exhibit a similar behavior. Under cubic symmetry, all three principal directions are guaranteed the same number of light and heavy branches.
Polycrystallinity may then be irrelevant here, but now the concern is that the coexistence of light and heavy branches in the direction of transport (as opposed to light in the transport direction, heavy in other directions) with inter-scattering between them can be inherently limiting (recall Zone 1 in Fig. \ref{fig:3ddoubleopt}). Relevant cases are Fe-based full-Heuslers \cite{fe2yz} and perovskite SrTiO$_{3}$ \cite{srtio3lowdimensional}, which exhibit unidirectional flat-and-dispersive conduction bands, with the 2D-like, precipitous DOS at the band edge. Fe$_{2}$TiSi, a member of the former family, is particularly intriguing because its flat-and-dispersive conduction bands are opposed by triply-degenerate, isotropic valence bands, offering a direct comparison of the $n$-type and $p$-type performances of the respective band structures. According to our DFT \cite{dft} calculation using the PBE functional \cite{pbe}, the lowermost conduction band is very flat along $\Gamma-X$, with an energy width of 0.05 eV ($m_{\parallel}\approx 41$), and dispersive in other directions ($m_{\perp}\approx0.7$). A second isotropic conduction band ($m\approx0.4$) is degenerate at $\Gamma$. Opposing them are three isotropic valence bands with comparable $0.4\le m \le 0.75$. Theoretical thermoelectric properties were studied with rigorous first-principles treatment of electron-phonon scattering \cite{ba2biau}, but the $n$-type PF (5 mW m$^{-1}$ K$^{-2}$ at 300 K) was predicted to be only barely higher than the $p$-type PF (4 mW m$^{-1}$ K$^{-2}$ at 300 K), with no sign of the pronounced performance promised by the flat-and-dispersiveness. As for SrTiO$_{3}$, according to our DFT calculation, the width of the heavy branch of the lowermost conduction band along $\Gamma-X$ is approximately 0.1 eV ($m_{\parallel}\approx 7$), while in the dispersive directions $m_{\perp}\approx 0.8$. Two additional, relatively isotropic conduction bands ($0.4\le m \le0.7$) disperse from $\Gamma$ at the CBM. Multiple experimental reports exist for SrTiO$_{3}$ on single crystals, which should be the best-performing samples and the most comparable to theoretical results. Although respectable $n$-type PFs of 3.6 mW m$^{-1}$ K$^{-2}$ \cite{srtio3experiment1} and 2.3 mW m$^{-1}$K$^{-2}$ \cite{srtio3experiment2} have been recorded at room temperature, neither value is anywhere near what Fig. \ref{fig:3dsingleopt} promises. These observations collectively suggest that cubic symmetry may cap the full potential of the flat-and-dispersive bands in real materials. As a separate point, it would certainly help if the band masses in the light direction of both compounds were much smaller.

As a final deliberation, we address the question: what, then, is the optimal band structure, all things considered? The literature has convincing cases for both an extremely anisotropic, flat-and-dispersive band and a band with multiple dispersive pockets at off-symmetry points. Reflecting on our modeling, we conclude the following. For a single band, if a bidirectional flat-and-dispersive band attaining $\frac{m_{\perp}}{m_{\parallel}}\approx1000$ with as small an $m_{\parallel}$ as possible can be realized in a single crystal, it would constitute the optimal single band structure. Otherwise, a band with a multitude of dispersive pockets at off-symmetry points and weak inter-scattering would be the best target, as such pockets provide moderate anisotropy and are more immune to polycrystallinity.
For a given single band, the presence of additional bands with equally light mass in the transport direction would increase performance, though this benefit is negligible if $\kappa_{e}>\kappa_{\text{lat}}$ and/or if $s_{\text{int}}\approx1$. For a given overall intrinsic band structure, resonance levels and optimum bandwidth further improve performance, the latter being capable of boosting $zT$ to the highest values of all band structure designs considered here and being particularly consequential for low-temperature operation.

As efforts to discover and design thermoelectrics with $zT>3$ continue, the blueprints for high performance grow increasingly influential. Common rules regarding beneficial band structures for bulk thermoelectrics are largely drawn from simple band models without realistic scattering. Using a straightforward but improved approach, we herein fine-tune those blueprints while proposing optimal band structures and design principles along the way. Our generalized findings from this modeling study are mutually supportive of and consistent with the findings from recent targeted studies of high-performing materials with high-fidelity first-principles computations \cite{ba2biau,heuslers,cosiyi,mos2yi,analoguepbte}. We hope the theoretical investigations of the present study help the community navigate rationally towards next-generation thermoelectrics.

\section{Methods}

Hartree atomic units ($\hbar=m_{e}=a_{o}=q=4\pi\epsilon_{0}=1$) are used throughout the Methods section. All calculations are performed with a set of in-house Mathematica codes.

\subsection{Band Structure}

To generate a realistic solid-state band structure, a band is created by smoothly connecting an upward paraboloid to an inverted paraboloid at a selected inflection point. The advantages of such a band structure include: 1) it remains faithful to solid-state band theory, which requires a band to cross the zone boundary orthogonally, save for when crystal and orbital symmetries allow band crossing or degeneracy at the zone boundary (e.g., in graphene); 2) it formally maintains the validity of an effective-mass-based description of the band throughout; 3) relatively simple analytic models of scattering can be directly applied to the upward-parabolic portion, and can be applied with modification to the inverted-parabolic portion; and 4) it can be used to explore a wide range of band structure shapes by modulating the points of inflection. The equation for this band structure is
\begin{widetext}
\begin{equation}\label{eq:band}
\begin{aligned}
E=E_{0}&+\left(\frac{\text{Min}(|k_{x}|,\tilde{k_{x}})^{2}}{2m_{\text{up},x}}+\frac{\tilde{k_{x}}^{2}-(G_{x}-\text{Max}(|k_{x}|,\tilde{k_{x}}))^{2}}{2m_{\text{down},x}}\right) +\left(\frac{\text{Min}(|k_{y}|,\tilde{k_{y}})^{2}}{2m_{\text{up},y}}+\frac{\tilde{k_{y}}^{2}-(G_{y}-\text{Max}(|k_{y}|,\tilde{k_{y}}))^{2}}{2m_{\text{down},y}}\right) \\
&+\left(\frac{\text{Min}(|k_{z}|,\tilde{k_{z}})^{2}}{2m_{\text{up},z}}+\frac{\tilde{k_{z}}^{2}-(G_{z}-\text{Max}(|k_{z}|,\tilde{k_{z}}))^{2}}{2m_{\text{down},z}}\right) \\
&\pm \left(\frac{|\tilde{k_{x}}^{2}-(G_{x}-\tilde{k_{x}})^{2}|}{2m_{\text{down},x}}+\frac{|\tilde{k_{y}}^{2}-(G_{y}-\tilde{k_{y}})^{2}|}{2m_{\text{down},y}}+\frac{|\tilde{k_{z}}^{2}-(G_{z}-\tilde{k_{z}})^{2}|}{2m_{\text{down},z}}\right).
\end{aligned}
\end{equation}
\end{widetext}
Here, $\tilde{k_{x}}$ denotes the inflection point in the $x$-direction, and $G_{x}$ is the reciprocal lattice vector in the $x$-direction, i.e., the BZ boundary in the $x$-direction.
If inflection occurs halfway to the BZ boundary, then $\tilde{k_{x}}=G_{x}/2$. The last terms are subtracted ($-$) if $\tilde{k_{x}}\ge G_{x}/2$, and added ($+$) if $\tilde{k_{x}}<G_{x}/2$. The same is true in the $y$ and the $z$ directions. Effective masses of the inverted parabolic portion ($m_{\text{down}}$) are obtained by enforcing derivative continuity at the inflection point in every direction. Although a broad range of band shapes could be explored by changing the inflection point $\tilde{k}$ in any of the three Cartesian directions, in this work we limit ourselves to bands that inflect halfway to the zone boundary in all three directions. Under this assumption, the effective mass (inverse curvature) profiles of the upward paraboloid portion and the inverted paraboloid portion are identical ($m_{\text{up}}=m_{\text{down}}$), rendering the entire band structure describable with one common set of directional effective masses. We note that this band could also serve as a first-order approximation of the tight-binding cosine band. We create these model bands centered at $\Gamma$ in a simple cubic Brillouin zone corresponding to an arbitrary lattice parameter of $15$ $a_{o}$ (Bohr radii) $\sim 7.9$ \r{A}, which is a reasonable lattice parameter for a real thermoelectric. The density of states (DOS) is calculated using the tetrahedron method \cite{tetrahedron}. Detailed diagrams of the band structure and DOS are given in Supplementary Figs. 1 and 2.

\subsection{Carrier Scattering and Transport}

Thermoelectric properties are computed by numerically integrating Eqs. \ref{eq:sigma}--\ref{eq:kappae}. The BZ is sampled to convergence with a \textbf{k}-point mesh of $40\times40\times40$. We fix the effective mass of the principal band in the transport direction ($m_{x}$), unless it evolves isotropically, to enable fair comparison of the performance of various band structures. We also ignore bipolar transport except in the two-band case with a conduction and a valence band. This is roughly tantamount to assuming a band gap larger than 0.4 eV - the maximum energy range of thermal excitation when the Fermi level is placed at the band minimum. The ultimate objective is to determine band structures that theoretically maximize thermoelectric performance, and to that end, some settings are fixed at values known to be beneficial for thermoelectrics. For instance, we intentionally fix $\kappa_{\text{lat}}$ to a low value of 0.5 W m$^{-1}$ K$^{-1}$, as is the case in many phonon-glass materials \cite{tetrahedrite,lonepair1,snsenature,clathrateapl,tl3vse4science}. We fix the principal band mass in the transport direction ($m_{x}$) to a small value of 0.05, in the range of GaAs (0.067) and InSb (0.014). Band anisotropy is also taken to the extreme to explore the limits of its benefits. These settings sometimes lead to predictions of higher $zT$ ($\sim 9$) than commonly encountered in the literature. Though they are optimistic and difficult to simultaneously satisfy experimentally, they are far from unrealistic, as materials with lower $m$ or $\kappa_{\text{lat}}$ are known. They constitute the right regime of exploration in the discourse of high-performance thermoelectrics, and provide some estimate of the realistic upper limit of bulk thermoelectric performance.
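For concreteness, the following minimal Python sketch - an illustration, not the in-house Mathematica implementation - builds a one-dimensional cut of the band of Eq. \ref{eq:band} (inflection halfway to the BZ boundary, $m_{\text{up}}=m_{\text{down}}$) and evaluates the Seebeck coefficient from Eqs. \ref{eq:sigma}--\ref{eq:zeta}. A constant lifetime is assumed and constant prefactors are dropped, since both cancel in $\alpha=\zeta/\sigma$; all parameter values are hypothetical.
\begin{verbatim}
import numpy as np

# 1D cut of the paraboloid + inverted-paraboloid band from Methods; Hartree a.u.
a, m = 15.0, 0.05          # lattice parameter (Bohr) and effective mass
G = 2*np.pi/a              # BZ boundary; inflection halfway, at kt = G/2
kt = G/2

def E_band(k):
    """Upward parabola for |k| < kt; smoothly inverted beyond (m_up = m_down)."""
    k = np.abs(k)
    return np.where(k < kt, k**2/(2*m), kt**2/m - (G - k)**2/(2*m))

kB, T, EF = 3.1668e-6, 500.0, 0.005   # k_B (Ha/K), T (K), trial Fermi level (Ha)
k = np.linspace(-G, G, 200001)
E = E_band(k)
v = np.gradient(E, k)                 # group velocity dE/dk

window = np.cosh((E - EF)/(2*kB*T))**-2/(4*kB*T)   # -df/dE, the Fermi window
w = v**2*window                       # constant-tau spectral weight
dk = k[1] - k[0]
sigma = np.sum(w)*dk                  # ~ Eq. (3), prefactors dropped
zeta = np.sum((EF - E)*w)*dk/T        # ~ Eq. (4), prefactors dropped
print("alpha = zeta/sigma =", zeta/sigma, "a.u. (negative for n-type)")
\end{verbatim}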
To calculate $\tau$, various scattering mechanisms, namely deformation-potential scattering (DPS) by (acoustic) phonons, polar-optical scattering (POS), and ionized-impurity scattering (IIS), are treated according to well-established formalisms \cite{ziman,nolassharpgoldsmid,lundstrom,frolich,brooks,ionizedimpurity}, with appropriate adjustments to account for anisotropy, inverted parabolicity, and the BZ-bounded nature of our band structures, with details to follow in the next subsections. Once the lifetimes under the three mechanisms are calculated, the overall $\tau$ is estimated by Matthiessen's rule \cite{matthiessen}. Supplementary Fig. 3 plots the scattering profiles of the three processes based on Eqs. \ref{eq:defpottau}, \ref{eq:postaunew}, and \ref{eq:brooksherringnew}. For the isotropic cases, they exhibit excellent agreement with the corresponding parabolic band formulations. DPS exhibits the expected $\tau^{-1}\propto D(E)$ relation. POS is characterized by the emission onset near the band edge followed by a gradual decay. IIS reproduces the Brooks-Herring behavior.

\subsection{Deformation-Potential Scattering}

The theory of deformation potential was originally developed by Bardeen and Shockley for long-wavelength acoustic phonon scattering with the assumption of elasticity \cite{defpotbardeenshockley}. Combining it with the generalized deformation potential by Kahn and Allen \cite{defpotkahnallen}, we write the DPS lifetimes as follows:
\begin{equation}\label{eq:defpottau}
\tau_{\text{DPS},\mathbf{k}}=\frac{\rho v_{s}^{2}}{\pi k_{\text{B}}T(\Delta+m\mathbf{v_{k}}\cdot\mathbf{v_{k}})^{2}}D^{-1}(E_{\mathbf{k}}),
\end{equation}
where $\Delta$ is the Bardeen-Shockley deformation potential at the band edge (band shift with lattice deformation) and the Kahn-Allen correction term $m\mathbf{v_{k}}\cdot\mathbf{v_{k}}$ accounts for the shift in the reciprocal space vectors with deformation, which grows large near the zone boundary.

When more than one band/pocket is present, interband/intervalley scattering must be accounted for. The matrix elements for inter-scattering are not obtainable without the details of the phonon spectrum and the Hamiltonian, which we lack because we are not studying a real system. Nevertheless, it is reasonable to assume that inter-scattering is generally weaker than intra-scattering because the wavefunction overlap between distinct bands or pockets is generally weaker than that within a band, and inter-scattering is often heavily reliant on zone-boundary phonons, which are less populated than zone-center phonons. To account for the strength of inter-scattering as such, we introduce a parameter, $s_{\text{int}}$, that acts as the lumped effect of the above-mentioned considerations. This quantity modulates the inter-scattering strength relative to intraband scattering. We set $s_{\text{int}}=0.5$ to reflect the usually weaker inter-scattering, which allows for a trivial extension of Eq. \ref{eq:defpottau}: the added phase space due to the second band enters in proportion to its DOS. The overall DPS lifetime, for band 1, now becomes
\begin{widetext}
\begin{equation}\label{eq:intervalley}
\tau_{\text{DPS},1\mathbf{k}}=\frac{\rho v_{s}^{2}}{\pi k_{\text{B}}T (\Delta+m_{1}\mathbf{v}_{1\mathbf{k}}\cdot\mathbf{v}_{1\mathbf{k}})^{2}} \left[ D_{1}(E_{1\mathbf{k}})+ s_{\text{int}}D_{2}(E_{1\mathbf{k}}) \right]^{-1},
\end{equation}
\end{widetext}
where the subscripts 1 and 2 label the two bands.
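A direct consequence of the enlarged phase space in Eq. \ref{eq:intervalley} is the bounded gain from pocket multiplicity discussed in Results: for $N_{v}$ identical pockets, $n\propto N_{v}$ while $\tau\propto\left(1+s_{\text{int}}(N_{v}-1)\right)^{-1}$. The minimal sketch below merely tabulates this scaling factor; it is an illustration of the argument, not of the full lifetime model.
\begin{verbatim}
# Conductivity gain sigma(N_v)/sigma(1) for N_v identical pockets:
# n ~ N_v while tau ~ 1/(1 + s_int*(N_v - 1)), saturating at 1/s_int.
def conductivity_gain(N_v, s_int=0.5):
    return N_v/(1.0 + s_int*(N_v - 1))

for N_v in (1, 2, 4, 8, 64):
    print(N_v, round(conductivity_gain(N_v), 3))
# -> 1.0, 1.333, 1.6, 1.778, 1.969; the limit is 1/s_int = 2 for s_int = 0.5
\end{verbatim}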
As Eq. \ref{eq:intervalley} indicates, the addition of inter-scattering phase space is determined by the presence of second-band states at a given $E_{1}$. That is, if $D_{2}(E_{1\mathbf{k}})=0$, then there is no inter-scattering. In spite of their simplicity, the above formulations also work well for zone-boundary phonon scattering. For instance, the Fe$_{2}$TiSi conduction bands are very flat-and-dispersive, allowing significant zone-boundary phonon scattering as well as intervalley/interband scattering. Accurately calculated scattering rates of Fe$_{2}$TiSi using DFT band structures and electron-phonon matrix elements \cite{epw1,epw3} behave essentially as $\tau^{-1}(E)\propto D(E)$ (see Supplementary Fig. 4a) \cite{ba2biau}, which is also the behavior predicted by Eqs. \ref{eq:defpottau}--\ref{eq:intervalley}. The same conclusion holds for the flat-and-dispersive valence bands of Li$_{2}$TlBi \cite{analoguepbte}. Therefore, the phenomenological treatment by Eqs. \ref{eq:defpottau}--\ref{eq:intervalley} captures the essence of intra- and inter-scattering even in cases with extreme anisotropy.

\subsection{Polar-Optical Scattering}

We modify the established formula for polar-optical scattering \cite{lundstrom,nolassharpgoldsmid} so as to reasonably account for band anisotropy and inverted parabolicity:
\begin{widetext}
\begin{equation}\label{eq:postaunew}
\tau_{\text{POS},\mathbf{k}}=\frac{|\mathbf{v_{k}}|}{2\omega_{o}} \left(\frac{1}{\epsilon_{\infty}}-\frac{1}{\epsilon}\right)^{-1} \left[(b(\omega_{o})+1)\cdot\text{sinh}^{-1}\left(\frac{D(E_{\mathbf{k}}-\omega_{o})}{D(\omega_{o})}\right) + b(\omega_{o})\cdot\text{sinh}^{-1}\left(\frac{D(E_{\mathbf{k}})}{D(\omega_{o})}\right)\right]^{-1}.
\end{equation}
\end{widetext}
Modifications come from exchanging $\sqrt{E}$ with our tetrahedron-integrated $D(E)$ and using the \textbf{k}-dependent prior form (instead of the less general $E$-dependent form - see Supplementary Discussion). Use of the group velocity norm ($|\mathbf{v_{k}}|$) obviates the dependence on $m$ and $E$ through the relation $|\mathbf{v_{k}}|=\sqrt{\frac{2E}{m}}$ for an isotropic parabolic band. The optical phonon frequency is represented by $\omega_{o}$, and $b(\omega_{o})$ is the Bose-Einstein population. The left term in the square brackets accounts for phonon emission whereas the right term accounts for phonon absorption. For our band structures, Eq. \ref{eq:postaunew} is exact in the upward-parabolic portions up to the inflection point, and past it, approximates the true lifetimes. When more than one band pocket exists at one \textbf{k}-point, nothing prohibits interband POS from occurring. To account for this, we take the same approach as we do with DPS and use $D(E)=D_{1}(E)+s_{\text{int}}D_{2}(E)$ to enlarge the phase space.

\subsection{Ionized-Impurity Scattering}

We use a modified version of the Brooks-Herring formula \cite{brooks,ionizedimpurity} that reasonably accounts for band anisotropy and inverted parabolicity:
\begin{equation}\label{eq:brooksherringnew}
\tau_{\text{IIS},\mathbf{k}}=\frac{\epsilon^{2}|\mathbf{k}|^{4}}{2\pi^{3}N_{i}Z^{2}}D^{-1}(E_{\mathbf{k}})\left(\text{log}(1+\gamma_{\mathbf{k}})-\frac{\gamma_{\mathbf{k}}}{1+\gamma_{\mathbf{k}}}\right)^{-1},
\end{equation}
where the screening term is
\begin{equation}\label{eq:gammanew}
\gamma_{\mathbf{k}}=\frac{4|\mathbf{k}|^{2}\epsilon k_{\text{B}}T}{n}\left(\frac{F_{\frac{1}{2}}(E_{\text{F}})}{F_{-\frac{1}{2}}(E_{\text{F}})}\right).
\end{equation}
Modification comes from using the \textbf{k}-dependent prior form of the Brooks-Herring formula (instead of its typical energy-dependent form - see Supplementary Methods) and replacing the terms in it that represent the parabolic DOS with our tetrahedron-integrated $D(E)$. For our band structures, this corrected formula is again exact for the upward-parabolic portions from $\Gamma$ to the inflection point, and past it, closely approximates the true lifetimes. When more than one band pocket exists at the same \textbf{k}-point, interband IIS can take place. To account for this, we again use $D(E_{1\mathbf{k}})=D_{1}(E_{1\mathbf{k}})+s_{\text{int}}D_{2}(E_{1\mathbf{k}})$ to modify the phase space. We assume that one impurity donates or accepts one charge, i.e., an effective impurity charge of $Z=1$. This choice means that the carrier concentration ($n$) effectively equals the impurity concentration ($n=N_{i}$) at appreciable doping levels.

\subsection{Materials Parameters}

To calculate specific scattering rates, material-dependent quantities that control the relative strengths of the three scattering mechanisms must be chosen. These include the deformation potential, the dielectric constants, and the optical phonon frequency, among others. We select plausible values for these quantities that occur prevalently in real materials, as listed in Table \ref{tab:quantities}. We emphasize that the choice of these values renders the relative strengths of the scattering channels arbitrary; what is not arbitrary is the characteristic thermoelectric behavior of bands under a given scattering regime.

\begin{table}
\begin{tabular}{|ccc|}
\hline
\textbf{Arbitrary Quantity} & \textbf{Symbol} & \textbf{Value} \\
\hline
Temperature & $T$ & 500 K \\
Density & $\rho$ & 5000 kg/m$^{3}$ \\
Sound Velocity & $v_{s}$ & 4000 m/s \\
Deformation Potential & $\Delta$ & 0.4 Ha = 10.8 eV \\
Interband Scattering Strength & $s_{\text{int}}$ & $0.5$ \\
Static Dielectric Constant & $\epsilon$ & 30 \\
High-freq. Dielectric Constant & $\epsilon_{\infty}$ & 25 \\
Optical Phonon Frequency & $\omega_{o}$ & $k_{\text{B}}T/2$ \\
Effective Charge of an Impurity & $Z$ & 1 \\
Impurity Concentration & $N_{i}$ & $n/Z$ \\
Lattice Thermal Conductivity & $\kappa_{\text{lat}}$ & 0.5 Wm$^{-1}$K$^{-1}$ \\
\hline
\end{tabular}
\caption{Arbitrary quantities used in the scattering and transport models.}
\label{tab:quantities}
\end{table}

\section{Data Availability}
The data can be reproduced, or generated with any user-desired parameters, using the publicly available Mathematica notebooks (see below).

\section{Code Availability}
The Mathematica notebooks are publicly available at the following link: https://github.com/jsyony37/bandmodel.

\section{Acknowledgments}
This work was supported by funding from the U.S. Department of Energy, Office of Basic Energy Sciences, Early Career Research Program, which supported J. P. and A. J. Lawrence Berkeley National Laboratory is funded by the Department of Energy under award DE-AC02-05CH11231. V. O. acknowledges financial support from the National Science Foundation Grant DMR-1611507. This work used resources of the National Energy Research Scientific Computing Center, a Department of Energy Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. J. P. thanks Younghak Kwon of UCLA Mathematics for helpful discussions.
\section{Supplementary Figures}

\begin{figure}[hp]
\centering
\includegraphics[width=1 \linewidth]{pinvp}
\caption{Contour plots of the paraboloid + inverted paraboloid band structure with inflection points at 0.25, 0.5, and 0.75 of the way to the BZ boundaries. In the top row, $m_{x}=m_{y}=m_{z}=0.05$ (isotropic). In the middle row, $m_{x}=m_{y}=0.05$ and $m_{z}=5$. In the bottom row, $m_{x}=0.05$ and $m_{y}=m_{z}=5$. The energy scale is in Hartree.}
\label{fig:pinvp}
\end{figure}

\newpage

\begin{figure}[hp]
\centering
\includegraphics[width=1 \linewidth]{pinvpdos}
\caption{The density of states (\color{red}red\color{black}) of the paraboloid + inverted paraboloid band structure with inflection points at 0.25, 0.5, and 0.75 of the way to the BZ boundaries. In the top row, $m_{x}=m_{y}=m_{z}=0.05$ (isotropic). The parabolic DOS of identical effective mass is plotted in black ($\sim\sqrt{E}$), which agrees well with the initial upward-parabolic portion. For the isotropic band inflecting at 0.5 of the way to the BZ boundary, the similar tight-binding cosine DOS is plotted in \color{blue}blue \color{black} for comparison. In the middle row, $m_{x}=m_{y}=0.05$ and $m_{z}=5$ (unidirectional anisotropy), and the DOS onset resembles the two-dimensional precipice. In the bottom row, $m_{x}=0.05$ and $m_{y}=m_{z}=5$ (bidirectional anisotropy), and the DOS onset resembles the one-dimensional peak.}
\label{fig:pinvpdos}
\end{figure}

\newpage

\begin{figure*}[hp]
\centering
\includegraphics[width=1 \linewidth]{scatter}
\caption{The \textbf{k}-dependent scattering rates plotted against energy for the paraboloid + inverted paraboloid band structure inflecting halfway to the zone boundary. The vertical dashed lines demarcate the end of the purely upward-parabolic regime. \textbf{a, d, g)} Deformation-potential scattering (Eq. 11 in the main text), \textbf{b, e, h)} polar-optical scattering (Eq. 14 in the main text), and \textbf{c, f, i)} ionized-impurity scattering (Eq. 18 in the main text). \textbf{a-c)} Isotropic case, where $m_{x}=m_{y}=m_{z}=0.05$. Overlaid in solid lines are the energy-dependent scattering rates for a generic parabolic band of the same effective mass profile given by Eqs. 11, 13, and 15 in the main text. The agreements are essentially perfect in the upward-parabolic regime (before the dashed vertical lines), followed by deviations due to the downward inflection of our band. \textbf{d-f)} One-way-anisotropic case, where $m_{x}=m_{y}=0.05$ and $m_{z}=0.5$. \textbf{g-i)} Two-way-anisotropic case, where $m_{x}=0.05$ and $m_{y}=m_{z}=0.5$.}
\label{fig:scatter}
\end{figure*}

\newpage

\begin{figure}[hp]
\centering
\includegraphics[width=0.8 \linewidth]{scatteringsupp.pdf}
\caption{The band structure and electron-phonon scattering rates of Fe$_{2}$TiSi, with the DOS overlaid in \color{red}red\color{black}. The scattering rates are calculated with the EPW software. The flat-and-dispersive conduction bands are dominated by DPS, which is essentially proportional to the DOS. The valence bands are dominated by POS, which does not scale with the DOS as well \cite{ba2biau}.}
\label{fig:scatteringsupp}
\end{figure}

\vspace{5cm}

\begin{figure}[hp]
\centering
\includegraphics[width=0.8\linewidth]{seebeckef0.pdf}
\caption{The Seebeck coefficient in the light direction ($x$) with the Fermi level fixed to the band minimum, as a function of changing effective masses in three directions. \textbf{a)} A single band evolves as depicted in Fig. 1a of the main text. \textbf{b)} Two bands exist, where the second band evolves as depicted in Fig. 1b of the main text.
Each zone indicates a certain characteristic evolution: isotropic increase in $m$ from 0.05 to 500 in Zone 1, anisotropic increase in $m_{y}$ from 0.05 to 500 in Zone 2, and anisotropic increase in both $m_{y}$ and $m_{z}$ from 0.05 to 500 in Zone 3. Four different scattering regimes are considered: the POS limit (\color{blue}blue\color{black}), the IIS limit (\color{green}green\color{black}), the DPS limit (\color{red}red\color{black}), and the overall effect (black). The black dashed horizontal line marks the isotropic value.}
\label{fig:3dsingleef0}
\end{figure}

\newpage

\begin{figure}[hp]
\centering
\includegraphics[width=0.85 \linewidth]{anisotropyprofiles}
\caption{Schematics of factors that render band anisotropy's effect on thermoelectric performance beneficial or harmful. \textbf{a)} Evolution of moving-averaged group velocities in the $x$ direction, $\langle v_{x}^{2}(E) \rangle$, for one-way anisotropy. \textbf{b)} Evolution of moving-averaged $\langle v_{x}^{2}(E) \rangle$ for two-way anisotropy. \textbf{c)} Dispersions of two paraboloid + inverted paraboloid bands, one isotropic (blue) and one very anisotropic (orange). Black circles mark the ``low-energy voids,'' where low-energy states exist for the latter but are absent for the former.}
\label{fig:anisotropyprofiles}
\end{figure}

\newpage

\begin{figure}[hp]
\centering
\includegraphics[width=0.8 \linewidth]{3dsingleoptlow.pdf}
\caption{Same as Fig. 2 in the main text but with a larger $m_{x}=0.1$ and a higher $\kappa_{\text{lat}}=1$ W m$^{-1}$ K$^{-1}$.}
\label{fig:3dsingleoptlow}
\end{figure}

\vspace{2cm}

\begin{figure}[hp]
\centering
\includegraphics[width=1 \linewidth]{3dsingleoptsupp}
\caption{Supplement to Fig. 2 in the main text. Electronic transport properties in the light direction ($x$) for the single-band case, where horizontal dashed lines indicate the initial value at $m_{x}=m_{y}=m_{z}=0.15$. \textbf{a)} The charge (Ohmic) conductivities. \textbf{b)} The thermoelectric conductivities. \textbf{c)} The electronic thermal conductivities. \textbf{d)} Mobilities, which in the isotropic region decrease as $\mu_{\text{DPS}}\sim m^{-2.49}$, $\mu_{\text{POS}}\sim m^{-1.49}$, and $\mu_{\text{IIS}}\sim m^{-1.88}$. \textbf{e)} The Lorenz numbers, where the dotted line indicates the Wiedemann-Franz value ($L_{0}=2.44\times10^{-8}$ W$\Omega$K$^{-2}$). \textbf{f)} The electronic-part $zT$.}
\label{fig:3dsingleoptsupp}
\end{figure}

\newpage

\begin{figure}[hp]
\centering
\includegraphics[width=1 \linewidth]{polycrystalline}
\caption{Polycrystalline thermoelectric properties of a single band, approximated by taking the harmonic mean of directional properties; supplemental to Fig. 2 in the main text. \textbf{a)} $zT$, \textbf{b)} the PF, and \textbf{c)} the Seebeck coefficient. The anisotropic Zones 2 and 3 are marked by significantly worse performance compared to the single-light-direction performance provided in Fig. 2 in the main text. The isotropic Zone 1 is identical to the single-direction trend.}
\label{fig:polycrystalline}
\end{figure}

\vspace{5cm}

\begin{figure}[hp]
\centering
\includegraphics[width=1 \linewidth]{3ddoubleoptsupp}
\caption{Supplement to Fig. 3 in the main text. Electronic transport properties in the light direction ($x$) for the two-band case, where horizontal dashed lines indicate the initial values for the single-band case at $m_{x}=m_{y}=m_{z}=0.05$. \textbf{a)} The charge (Ohmic) conductivities. \textbf{b)} The thermoelectric conductivities.
\textbf{c)} The electronic thermal conductivities. \textbf{d)} Mobilities, which in the isotropic region decrease as $\mu_{\text{DPS}}\sim m_{2}^{-2.46}$, $\mu_{\text{POS}}\sim m_{2}^{-1.37}$, and $\mu_{\text{IIS}}\sim m_{2}^{-1.81}$, where $m_{2}$ is the effective mass of the evolving second band. \textbf{e)} The Lorenz numbers, where the upper dotted horizontal line indicates the Wiedemann-Franz value ($L_{0}=2.44\times10^{-8}$ W$\Omega$K$^{-2}$). \textbf{f)} The electronic-part $zT$.} \label{fig:3ddoubleoptsupp} \end{figure} \newpage \begin{figure}[hp] \centering \includegraphics[width=0.8 \linewidth]{3ddoubleoptnoint} \caption{Same as Fig. 3 in the main text but with no interband scattering ($s_{\text{int}}=0$).} \label{fig:3ddoubleoptnoint} \end{figure} \vspace{3cm} \begin{figure}[hp] \centering \includegraphics[width=1 \linewidth]{3dbipolar} \caption{Thermoelectric properties in the light direction ($x$) with bipolar effect (zero band gap) as the valence band effective masses evolve and the conduction band is fixed at $m_{x}=m_{y}=m_{z}=0.05$, as in Fig. 1d in the main text. \textbf{a)} The power factor, \textbf{b)} the Seebeck coefficient, and \textbf{c)} $zT$. Each zone indicates a certain characteristic evolution of the valence band: isotropic increase in $m$ from 0.05 to 500 in Zone 1, anisotropic increase in $m_{y}$ from 0.05 to 500 in Zone 2, and anisotropic increase in both $m_{y}$ and $m_{z}$ from 0.05 to 500 in Zone 3. Only DPS is considered due to the metallic assumption (very high carrier concentrations). The red horizontal dashed lines indicate the single-conduction-band (i.e., insulating-state) values under DPS scattering only, and the black horizontal dashed lines indicate those under the overall effect of POS and IIS as well as DPS.} \label{fig:3dbipolar} \end{figure} \newpage \begin{figure}[hp] \centering \includegraphics[width=0.8 \linewidth]{bandwidthef0} \caption{$T$-and-$\kappa_{\text{lat}}$-dependent optimum bandwidth and $zT$ under DPS and fixed $E_{\text{F}}=0$ for \textbf{a)} an isotropic 3D parabolic band of $m_{\text{GaAs}}=0.067$, and \textbf{b)} an isotropic 3D quartic band with the same energy as the parabolic band at the inflection point. The Fermi level here is fixed at the respective CBM for comparison purposes. $\kappa_{\text{lat}}$ is given in W m$^{-1}$K$^{-1}$.} \label{fig:bandwidthef0} \end{figure} \newpage \section{Supplementary Discussion} A review of the single parabolic band (SPB) model and the Pisarenko formulas for the Seebeck coefficient is in order, owing to their generality and wide use, as well as the limitations that our study improves upon. The Seebeck coefficient is, in the degenerate or metallic limit, \begin{equation}\label{eq:pisarenko1} \alpha=\frac{\pi^{2}k_{\text{B}}^{2}T}{3E_{\text{F}}}=\frac{2k_{\text{B}}^{2}}{3}mT\left(\frac{\pi}{3n}\right)^{\frac{2}{3}}, \end{equation} and in the non-degenerate case, \begin{equation}\label{eq:pisarenko2} \begin{aligned} \alpha&=k_{\text{B}}\left(\frac{5}{2}+s-\frac{E_{\text{F}}}{k_{\text{B}}T}\right) \\ &=k_{\text{B}}\left(\frac{5}{2}+s+\text{log}\left(n^{-1}\left(\frac{mk_{\text{B}}T}{2\pi}\right)^{\frac{3}{2}}\right)\right), \end{aligned} \end{equation} where $n$ is the carrier concentration, $m$ is the band effective mass, $k_{\text{B}}$ is the Boltzmann constant, and $s$ is the power of energy to which carrier lifetimes are proportional ($\tau\propto E^{s}$).
Contrary to a widely held notion, a heavier band (high $m$) does not by default generate a higher Seebeck coefficient under this model, since $\alpha$ can be written as a function of either $E_{\text{F}}$ or $m$-and-$n$, where the trivial exchange of variables is allowed by their interrelations: \begin{equation}\label{eq:fermidegenerate} E_{\text{F}}=\frac{\pi^{2}}{2m}\left(\frac{3n}{\pi}\right)^{\frac{2}{3}} \end{equation} in the degenerate case, and \begin{equation}\label{eq:ferminondegenerate} \frac{E_{\text{F}}}{k_{\text{B}}T }=\text{log}\left(n^{-1}\left(\frac{mk_{\text{B}}T}{2\pi}\right)^{\frac{3}{2}}\right) \end{equation} in the non-degenerate case. Equations~\ref{eq:pisarenko1}--\ref{eq:pisarenko2} stipulate that, if $E_{\text{F}}$ is fixed, then $\alpha$ is constant whatever the $m$ value, because $n$ would change accordingly. A light band with a small $m$ would produce the same Seebeck coefficient as a heavier band with a larger $m$, provided that $E_{\text{F}}$ is kept fixed. The lead-up to Eqs. \ref{eq:pisarenko1}--\ref{eq:pisarenko2} bears one hidden but critical assumption: that there is always enough (infinite) dispersion in all directions to cover the entire energy range relevant to thermoelectric phenomena. This assumption breaks down, however, when a band becomes so critically heavy that it reaches the BZ boundary before gaining enough energy to cover the entire relevant energy range for transport. Also, because a true solid-state band must at some point change in curvature from positive to negative and cross the BZ boundary orthogonally, there comes a point where assuming constant positive curvature introduces additional unrealistic effects. As will be seen, these effects lead to some conclusions that deviate from what would otherwise be drawn from Eqs. \ref{eq:pisarenko1}--\ref{eq:pisarenko2}. Other assumptions in typical parabolic band models include the absence of opposing bands (no bipolar effect) and a monotonic, or at least slowly varying, $\Sigma(E)$ such that the Sommerfeld expansion is valid, a requirement for arriving at Eqs. \ref{eq:pisarenko1}--\ref{eq:pisarenko2}. To demonstrate the importance of our method, we calculate our model-predicted Seebeck coefficient $\alpha$ with a fixed $E_{\text{F}}$ at the band minimum. This exercise helps clarify the difference in behavior between our model and previous models including the SPB model, and also clearly illustrates the concept of improving $\zeta$ relative to $\sigma$ via band structure alone. The results are plotted in Supplementary Fig. \ref{fig:3dsingleef0}. The fixed-$E_{\text{F}}$ results for the single-band evolution depicted in Fig. 1a of the main text are given in Supplementary Fig. \ref{fig:3dsingleef0}a. The main takeaway from Zone 1 is the problem of insufficient dispersion for critically heavy bands. We observe that $\alpha$ starts on a plateau that corresponds to the classic SPB model behavior. When the band becomes critically heavy, however, $\alpha$ starts to decrease below the SPB value. The origin of this deviation is that the band becomes heavy enough to terminate at the BZ boundary before gaining enough energy to fully trigger $\zeta$, and misses out on some high-energy states that would otherwise contribute relatively more to $\zeta$ than $\sigma$. For example, at 500 K, carriers of up to 0.18 eV and 0.25 eV above the Fermi level make up 97\% of the total contribution to $\sigma$ and $\zeta$, respectively.
However, in our model, if $m=5$, the band encounters the BZ boundary at a value of 0.18 eV, which is enough to almost entirely trigger $\sigma$ but miss important contributions to $\zeta$. Any further increase in $m$ translates to an increasingly greater relative loss for $\zeta$ than for $\sigma$, continuously degrading the Seebeck coefficient. Typical SPB models overlook this problem of insufficient dispersion within the finite-sized BZ and therefore break down for heavy enough bands. Of note, $\alpha$ goes to 0 as the band completely flattens out, which is explained in the later discussions on optimum bandwidth. The main lesson from Zones 2 and 3 in Supplementary Fig. \ref{fig:3dsingleef0}a, where the band evolves anisotropically, is the role of group velocity. Under DPS, and to a lesser extent under POS, we observe that moderate anisotropy gives the highest $\alpha$, represented by the $\alpha$ peak in the middle of the two zones ($m_{y}, m_{z}=5$). Because $\Sigma(E)\propto v^{2}(E)$ under DPS, thermoelectric behaviors are determined entirely by the average group velocities, $\langle v_{x}^{2}(E) \rangle$. For an isotropic parabolic band, $\langle v_{x}^{2}(E) \rangle = \frac{2E}{3m}$; thus, the contribution to $\Sigma(E)$ scales linearly with $E$. For a moderately anisotropic band, $\langle v_{x}^{2}(E) \rangle$ develops a kink. That is, it abruptly steepens in slope (see Supplementary Figs. \ref{fig:anisotropyprofiles}a--b). Specifically, for unidirectional anisotropy (heavy only in $z$), $\langle v_{x}^{2}(E) \rangle$ scales as $\frac{E}{m}$ post-kink, which is the 2D parabolic velocity scaling. For bidirectional anisotropy (heavy in $y$ and $z$), $\langle v_{x}^{2}(E) \rangle$ scales as $\frac{2E}{m}$ post-kink, which is the even steeper 1D parabolic velocity scaling. Relative to the scaling of an isotropic band, the kinked $\langle v_{x}^{2}(E) \rangle$ profiles weight $\zeta$ more than $\sigma$ because the velocities increase more steeply at higher energies than at lower energies. This allows $\alpha$ to peak at some moderate anisotropy, and the peak is higher for bidirectional anisotropy. For extremely anisotropic bands mimicking low-dimensional bands, which are also popularly known as ``flat-and-dispersive'' bands \cite{lowdimensional3d,quantumwell}, $\langle v_{x}^{2}(E) \rangle$ reverts to linear scaling but with the steeper, post-kink slopes. Another subtlety regarding extremely anisotropic bands is that they exhaust the ``low-energy voids''. That is to say, wherever carriers line up along the heavy direction(s), their dispersion in the light direction starts from essentially the band minimum energy. This stands in stark contrast to a less anisotropic band or an isotropic band, for which some dispersion towards the light direction may start from higher energies, leaving behind a void of states at the lowest energies (see Supplementary Fig. \ref{fig:anisotropyprofiles}c). Because low-energy states contribute relatively more to $\sigma$ than to $\zeta$, their absence is a clear benefit to thermoelectrics. Extremely anisotropic bands exhaust these voids, and therefore the overall $\alpha$ is somewhat lower than in the isotropic case. POS retains some of the same $\alpha$ signatures as DPS, while under IIS $\alpha$ only decreases with anisotropy. Next, we examine the case of two bands, whose results are given in Supplementary Fig. \ref{fig:3dsingleef0}b. Here, we fix the first band in shape and evolve the second band mass according to Fig. 1b in the main text.
The results are largely similar to the single-band results, except for a prominent peak in $\alpha$ in the middle of Zone 1 (under DPS and POS) where the second band flattens out isotropically. This peak represents the second band acting as a resonance level \cite{resonancelevelreview} that performs energy-filtering due to interband scattering. Although the isotropically heavy band has negligible direct contribution to transport, it can act as a localized scattering partner for the dispersive principal band where their energies overlap, or ``resonate,'' thereby preferentially scattering low-energy carriers. This increases $\zeta$ relative to $\sigma$ because the low-energy states that had previously contributed much more to $\sigma$ than $\zeta$ are now selectively scattered by the narrowed second band. As a result, $\alpha$ is able to well exceed its single-dispersive-band value. As the second band completely flattens out, however, its width becomes too narrow to filter enough states, and hence $\alpha$ is reduced again. We observe that IIS is not a good agent of energy-filtering. In summary, whereas $\alpha$ is constant for any band under the SPB model as long as $E_{\text{F}}$ is fixed, our revised model correctly reflects its fluctuating response to changes in the band structure, especially as the band approaches extreme shapes. The optimum $E_{\text{F}}$, plotted in Fig. 2a of the main text, is entirely below the band minimum (zero). This can be understood by noting that generally $\kappa_{e}\gg\kappa_{\text{lat}}$ in our model, which in turn leads to $zT\approx\frac{\alpha^{2}}{L}$, where $L$ is the Lorenz number. In the absence of the bipolar effect, because $\alpha$ is higher at lower $E_{\text{F}}$ whereas $L$ is relatively constant with respect to $E_{\text{F}}$, $zT$ peaks at low $E_{\text{F}}$ (non-degenerate doping) near where $\alpha$ peaks. The optimum $E_{\text{F}}$ fluctuates with band evolution. In Zone 1, the optimal $E_{\text{F}}$ initially increases as the band turns heavier. This is because, as the band turns heavier, $\kappa_{e}$ becomes comparable to and then lower than $\kappa_{\text{lat}}$. When $\kappa_{e}<\kappa_{\text{lat}}$, a higher PF is required to drive high $zT$. Because the PF is maximized with $E_{\text{F}}$ near the band minimum, the optimal $E_{\text{F}}$ increases to meet it. When the band becomes critically heavy and narrow in Zone 1, the optimal $E_{\text{F}}$ reverses course and moves away from the band minimum. This is because $E_{\text{F}}$ must again be placed at a distance from the band minimum in order to generate a finite $\alpha$, for reasons explained in the later discussions regarding optimum bandwidth. In Zones 2 and 3, as the band turns anisotropic, $\zeta$ and $\kappa_{e}$ both increase relative to $\sigma$ due to the steepening group velocity profile. Because $\kappa_{e}$ increases relative to $\kappa_{\text{lat}}$, which is fixed in our model, $zT$ relies increasingly less on a high PF and peaks at increasingly lower $E_{\text{F}}$, near where $\alpha$ is maximized. In Fig. 3a in the main text, describing the multi-band context, the optimal $E_{\text{F}}$ behaves largely as it does in Fig. 2a, except for the huge spike in the center of Zone 1. The spike, which pushes the optimal $E_{\text{F}}$ up through the band, corresponds to the case where the resonance effect is the most pronounced.
Because low-energy states are heavily scattered due to the second band performing energy-filtering, $\zeta$ does not suffer bipolar reduction even if these low-energy states are placed on the opposite side of the Fermi level. Neither does $\sigma$ significantly increase. In turn, $\zeta$ benefits from the larger $\Sigma(E)$ of the deeper conduction states. When the second band becomes too narrow to act as a resonance level, the optimal $E_{\text{F}}$ again falls below the band minimum, as in the single-band case of Fig. 2a. For a simple study of the bipolar effect, we fix the band gap to 0 such that the two bands are tangent to one another at $k=0$ and $E=0$. Note that the band gap is an adjustable parameter in our model. We then fix the dispersion of the conduction band and modulate the valence band effective masses (see Fig. 1c in the main text). Two tangent bands are admittedly not representative of realistic metallic band structures, but nonetheless, this set-up does probe the essence of how the bipolar effect could be resisted in metals and tiny-gap semiconductors. We consider only DPS, since the very high $n$ in metals would virtually completely screen the Coulombic mechanisms, POS and IIS. The message of Supplementary Fig. \ref{fig:3dbipolar} is straightforward: bipolar negation of the Seebeck coefficient is suppressed if one band is light and the opposing band is heavy, or more specifically if there is an asymmetry in $\Sigma(E)$ about the Fermi level. The greater the contrast, the better, though the benefit effectively saturates past a point. From the $n$-type point of view, a completely flat valence band would not carry any hole current but would only function as a potential resonance level for low-energy electrons (which would require inelastic processes). The desired effect in this picture is essentially a hybrid of the light-band-over-heavy-band rule and energy-filtering: keep holes heavy and filter them further out with as much resonance scattering as possible, while keeping electrons mobile and scattering-free except at very low energies. It is worth recognizing that, in the limit of a completely flat valence band, one essentially has a degenerate-semiconducting state with a resonance level and a ``gap'' below. This indicates that, within the conventional setting herein assumed, the ideal limit for a metallic thermoelectric is precisely the semiconducting limit with only DPS being present. The fact that the rightmost $zT$ values (under DPS) in each zone of Fig. 3a (main text) and Supplementary Fig. \ref{fig:3dbipolar}a are identical proves the point. Lastly, for metals and tiny-gap semiconductors, $\kappa_{e}$ is frequently far higher than $\kappa_{\text{lat}}$, and a small Lorenz number ($L=\frac{\kappa_{e}}{\sigma T}$) becomes critical. Even for typical semiconductors, once $\kappa_{\text{lat}}$ is reduced and the PF is improved, realizing a small $L$ would be the final piece of the puzzle. Mirroring the way in which bipolar transport is fought, it is theoretically rather clear what must be done to achieve a small $L$: filter out very high-energy states, because they contribute relatively more to the thermal current than to the thermoelectric or Ohmic currents. They could, in theory, either 1) be filtered out by additional states that locally accommodate heavy scattering at high energies, or 2), better yet, be absent altogether from the band structure. Investigation of optimal electronic structures for thermoelectrics can be traced back to the seminal work by Mahan and Sofo \cite{bestthermoelectric}.
They took a purely mathematical approach to formulate $zT$ in terms of the energy integrals presented in the main text, and derived the ideal spectral conductivity for the maximization of $zT$. They determined that a Dirac delta function near the Fermi level, say at $E^{\dagger}\approx E_{\text{F}}$, is the ideal functional form: \begin{equation}\label{eq:mahansofo} \Sigma(E)=D(E)v^{2}(E)\tau(E)=\Sigma_{0}\delta_{E,E^{\dagger}}, \end{equation} where $\Sigma_{0}$ is some pre-factor. $D(E)$ and $v^{2}(E)$ are directly determined by the electronic structure, while $\tau(E)$ is only indirectly related to it and heavily depends on electron scattering mechanisms. However, their approach was purely mathematical in nature, with an implicit assumption that $\kappa_{\text{lat}}=0$. Supplementary Eq. \ref{eq:mahansofo} must therefore be interpreted with caution when applied to reality. For Supplementary Eq. \ref{eq:mahansofo} to be satisfied, mathematically, at least one of $D(E)$, $v^{2}(E)$, or $\tau(E)$ must be $\delta_{E,E^{\dagger}}$. Physically, however, the terms cannot be independently reduced to a delta function. Firstly, $v^2(E)$ cannot be $\delta_{E,E^{\dagger}}$ while $D(E)$ is not, because $v^2(E)$ is categorically zero without some band dispersion around $E_{\text{F}}$, while no band dispersion can arise at all if $D(E)$ has no width. Secondly, provided there is some dispersion, $\tau(E)$ cannot be a delta function unless electrons are perfectly scattered everywhere but at a single energy, which is next to impossible. Then the only plausible way in which $\tau(E)$ can be a delta function is if $D(E)$ is one also. These considerations indicate that the only way for Supplementary Eq. \ref{eq:mahansofo} to hold is for $D(E)=N_{v}\delta_{E,E^{\dagger}}$, reflecting one or more ($N_{v}$) perfectly localized states, or perfectly flat bands. The factor $N_{v}$ arises from the fact that the eDOS of each band must integrate to 1 (or 2 if spin-degenerate) to conserve the number of electrons: \begin{equation}\label{eq:edosintegral} N_{v}=\sum_{1}^{N_{v}}\int_{-\infty}^{\infty} \delta_{E,E^{\dagger}}dE. \end{equation} This in turn forces $\tau(E)=\tau_{0}\delta_{E,E^{\dagger}}$ for some finite $\tau_{0}$ limited by elastic scattering, but more importantly it forces $v^{2}(E)\rightarrow0$ and therefore by default $\Sigma(E)\rightarrow0$. The implications of $\Sigma(E)\rightarrow0$ are as follows. First, the conductivity is immediately 0, as has also been pointed out by a previous study \cite{optimalbandwidth}, \begin{equation}\label{eq:sigma0} \sigma=\int_{-\infty}^{\infty} \Sigma(E)\left(-\frac{\partial f}{\partial E}\right)dE=0. \end{equation} Second, the Seebeck coefficient is expressible as the following limit as $\Sigma(E)\rightarrow0$ point-by-point: \begin{equation}\label{eq:mahansofoseebeck1} \begin{aligned} \alpha=\lim_{\Sigma(E)\rightarrow0}\frac{\frac{1}{T}\int_{-\infty}^{\infty} \Sigma(E)(E_{\text{F}}-E)\left(-\frac{\partial f}{\partial E}\right)dE}{\int_{-\infty}^{\infty} \Sigma(E)\left(-\frac{\partial f}{\partial E}\right)dE}. \end{aligned} \end{equation} Generally speaking, this is a non-trivial limit to evaluate because $\Sigma(E)$ is a function, not a scalar.
However, with the knowledge that $\Sigma(E)$ is widthless (due to $D(E)$), and so would approach zero at a single point, we can reformulate the limit as \begin{equation}\label{eq:vsquared} \lim_{\Sigma(E)\rightarrow0}\Sigma(E)=\lim_{N\rightarrow\infty}\frac{1}{N}\Sigma_{0}\delta_{E,E^{\dagger}}, \end{equation} where $N\ge1$ is an integer (we use $N$ to avoid confusion with the carrier concentration $n$), and evaluate Supplementary Eq. \ref{eq:mahansofoseebeck1} as \begin{equation}\label{eq:mahansofoseebeck2} \begin{aligned} \alpha&=\lim_{N\rightarrow\infty}\frac{\frac{1}{N}\frac{1}{T}\int_{-\infty}^{\infty} \Sigma_{0}\delta_{E,E^{\dagger}}(E_{\text{F}}-E)\left(-\frac{\partial f}{\partial E}\right)dE}{\frac{1}{N}\int_{-\infty}^{\infty} \Sigma_{0}\delta_{E,E^{\dagger}}\left(-\frac{\partial f}{\partial E}\right)dE} \\ &=\frac{1}{T}(E_{\text{F}}-E^{\dagger}), \end{aligned} \end{equation} and this behavior is graphically verified in Fig. 2a and Fig. 3c in the main text. In Fig. 2a, where $E^{\dagger}$ tends to $E_{\text{F}}$ at the band minimum as the band narrows, $\alpha$ tends to 0. In Fig. 3c, where $E_{\text{F}}$ is away from the band minimum that $E^{\dagger}$ tends to, $\alpha$ tends to some finite value corresponding to $(E_{\text{F}}-E^{\dagger})$. Third, by the same token as above, the Lorenz number can be shown to tend to 0 for a widthless band: \begin{widetext} \begin{equation}\label{eq:mahansofolorenz} \begin{aligned} L&=\lim_{N\rightarrow\infty}\frac{\frac{1}{N}\left[\frac{1}{T}\int_{-\infty}^{\infty} \Sigma_{0}\delta_{E,E^{\dagger}}(E_{\text{F}}-E)^{2}\left(-\frac{\partial f}{\partial E}\right)dE-T\alpha^{2}\int_{-\infty}^{\infty} \Sigma_{0}\delta_{E,E^{\dagger}}\left(-\frac{\partial f}{\partial E}\right)dE\right]}{\frac{1}{N}\int_{-\infty}^{\infty} \Sigma_{0}\delta_{E,E^{\dagger}}\left(-\frac{\partial f}{\partial E}\right)dE} \\ &=\frac{1}{T}(E_{\text{F}}-E^{\dagger})^{2}-\frac{1}{T}(E_{\text{F}}-E^{\dagger})^{2}=0, \end{aligned} \end{equation} \end{widetext} and this behavior is graphically suggested in Supplementary Fig. \ref{fig:3dsingleoptsupp}e. By Eqs. \ref{eq:mahansofoseebeck2} and \ref{eq:mahansofolorenz}, given some $E^{\dagger}$, a perfectly localized band of widthless $D(E)$ would lead to divergence in the ``electronic-part $zT$,'' or $zT$ without $\kappa_{\text{lat}}$: \begin{equation}\label{eq:electronicpartzt} z_{e}T = \frac{\alpha^{2}\sigma}{\kappa_{e}}T = \frac{\alpha^{2}}{L}=\infty. \end{equation} This behavior, consistent with the conclusions of the Mahan-Sofo theory, is graphically suggested in Supplementary Fig. \ref{fig:3dsingleoptsupp}f. However, because of the finite Seebeck coefficient and vanishing conductivity, the PF would vanish and, compounded by $\kappa_{\text{lat}}>0$ in real materials, $zT$ would likewise vanish. Ergo, even if a set of perfectly localized states could exist in real materials, it is not the physically ideal structure for $zT$ or the PF. The fundamental barrier is, again, that the components of $\Sigma(E)$ cannot be independently widthless. The value of $z_{e}T$ as a metric for thermoelectric performance improves only as $v(E)$ becomes finite and large and as $\kappa_{\text{lat}}$ is kept minimal. In supplement to the optimum bandwidth of a parabolic band, we also consider an isotropic quartic band, $E=c(k_{x}^{2}+k_{y}^{2}+k_{z}^{2})^{2}$. The quartic dispersion coefficient $c$ is selected such that it gives the quartic band the same energy as the parabolic band at the inflection point. Supplementary Fig. \ref{fig:bandwidthef0} shows the results.
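As a short aside (our own addition for the reader's convenience), the group-velocity averages quoted in the comparison below follow from elementary kinematics in the Hartree units used throughout, with $\langle v_{x}^{2}\rangle=v^{2}/3$ for an isotropic band: \begin{equation*} E=\frac{k^{2}}{2m}\;\Rightarrow\; v=\frac{k}{m},\quad v^{2}=\frac{2E}{m},\quad \langle v_{x}^{2}(E)\rangle=\frac{2E}{3m}, \end{equation*} \begin{equation*} E=ck^{4}\;\Rightarrow\; v=4ck^{3}=4c^{\frac{1}{4}}E^{\frac{3}{4}},\quad v^{2}=16\sqrt{c}\,E^{\frac{3}{2}},\quad \langle v_{x}^{2}(E)\rangle=\frac{16\sqrt{c}}{3}E^{\frac{3}{2}}. \end{equation*}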
Between the parabolic and quartic dispersions, we see that the latter performs better by about 20\%. Since $\tau\propto D^{-1}(E)$ under DPS, it is again the average group-velocity distribution that is responsible for the better quartic performance in the transport direction: $\langle v_{x}^{2}(E) \rangle=\frac{16\sqrt{c}}{3}E^{\frac{3}{2}}$. This obviously grows faster with $E$ than the parabolic $\langle v_{x}^{2}(E) \rangle=\frac{2}{3m}E$, thereby weighting $\zeta$ relatively more than $\sigma$. \section{Supplementary Methods} The deformation-potential scattering was initially developed for long-wavelength acoustic phonon scattering \cite{defpotbardeenshockley}. Even with the Kahn-Allen correction \cite{defpotkahnallen} that we introduce, the overall DPS rate follows the DOS for the most part. However, as mentioned in the main text, the $\tau^{-1}_{\text{DPS}}\propto D(E)$ model works very well in practice for very anisotropic bands in the presence of zone-boundary phonon scattering as well as interband scattering. This was verified by accurate first-principles calculations of electron-phonon scattering using the EPW software \cite{epw1,epw3} for materials such as Fe$_{2}$TiSi \cite{ba2biau} and Li$_{2}$TlBi \cite{analoguepbte}. The Fe$_{2}$TiSi conduction bands are flat-and-dispersive and dominated by DPS \cite{ba2biau}. The scattering rates for the conduction bands closely follow the DOS, which validates the $\tau^{-1}_{\text{DPS}}\propto D(E)$ model. The valence bands are affected significantly by POS, and their scattering rates deviate from the DOS. See Supplementary Fig. \ref{fig:scatteringsupp}. The Li$_{2}$TlBi valence bands are flat-and-dispersive and dominated by DPS \cite{analoguepbte}. The scattering rates for the valence bands closely follow the DOS, which again validates the $\tau^{-1}_{\text{DPS}}\propto D(E)$ model. Therefore, we expect the extension of Eqs. 10 and 11 in the main text to work well for our model band structures. For a parabolic band, there exists an analytic formula for the energy-dependent lifetime due to inelastic POS \cite{lundstrom,nolassharpgoldsmid}, \begin{widetext} \begin{equation}\label{eq:postau} \tau_{\text{POS}}(E)=\frac{E^{\frac{1}{2}}}{\sqrt{2}\omega_{o}m^{\frac{1}{2}}} \left(\frac{1}{\epsilon_{\infty}}-\frac{1}{\epsilon}\right)^{-1} \left[(b(\omega_{o})+1)\cdot\text{sinh}^{-1}\left(\sqrt{\frac{E}{\omega_{o}}-1}\right) + b(\omega_{o})\cdot\text{sinh}^{-1}\left(\sqrt{\frac{E}{\omega_{o}}}\right)\right]^{-1}. \end{equation} \end{widetext} The first (second) term in the square brackets represents emission (absorption). However, the derivation of Supplementary Eq. \ref{eq:postau} uses an energy-momentum phase-space integral simplified with a parabolic dispersion relation, rendering its direct application to non-parabolic bands unjustified. The inverse hyperbolic sine terms account for the availability of DOS ($\sim\sqrt{E}$ for a parabolic band) for carriers to be scattered into (final states). Our correction in the main text partially accounts for this by utilizing custom-calculated DOS and \textbf{k}-dependent forms of Supplementary Eq. \ref{eq:postau}.
The established energy-dependent formalism for IIS of a parabolic band is the Brooks-Herring formula \cite{brooks,ionizedimpurity}, \begin{equation}\label{eq:brooksherring} \tau_{\text{IIS}}(E)=\frac{\sqrt{2}\epsilon^{2}m^{\frac{1}{2}}E^{\frac{3}{2}}}{\pi N_{i}Z^{2}}\left(\text{log}(1+\gamma(E))-\frac{\gamma(E)}{1+\gamma(E)}\right)^{-1}, \end{equation} where $\gamma$ is a screening term defined as \begin{equation}\label{eq:gamma} \gamma(E)=\frac{8mE\epsilon k_{\text{B}}T}{n}\left(\frac{F_{\frac{1}{2}}(E_{\text{F}})}{F_{-\frac{1}{2}}(E_{\text{F}})}\right), \end{equation} and $F_{z}(E_{\text{F}})$ is the Fermi-Dirac integral \begin{equation}\label{eq:fdintegral} F_{z}(E_{\text{F}})=\frac{1}{\Gamma(z+1)} \int_{0}^{\infty} \frac{y^{z}}{1+\text{exp}(y-\frac{E_{\text{F}}}{k_{\text{B}}T})}dy. \end{equation} Its derivation involves \textbf{k}-space integration over spherical isoenergy surfaces of a parabolic band, rendering it also non-trivial to extend to non-parabolic bands. Our correction in the main text partially accounts for this by utilizing custom-calculated DOS and \textbf{k}-dependent forms of Supplementary Eqs. \ref{eq:brooksherring}--\ref{eq:gamma}.
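As a side remark (our own consistency check, not part of the original derivation): in the non-degenerate limit $E_{\text{F}}/k_{\text{B}}T\rightarrow-\infty$, the Fermi function in Supplementary Eq. \ref{eq:fdintegral} reduces to a Boltzmann factor, so that \begin{equation*} F_{z}(E_{\text{F}})\approx\frac{1}{\Gamma(z+1)}\int_{0}^{\infty} y^{z}\,e^{\frac{E_{\text{F}}}{k_{\text{B}}T}-y}\,dy=e^{\frac{E_{\text{F}}}{k_{\text{B}}T}} \end{equation*} independently of the order $z$. The ratio $F_{\frac{1}{2}}/F_{-\frac{1}{2}}$ in Supplementary Eq. \ref{eq:gamma} then tends to 1, recovering the familiar non-degenerate screening limit of the Brooks-Herring formula.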
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0015.json.gz" }
\section{The Problem} Applications give an experiential realization to what is theoretically possible and illuminate computational issues that the theory may not have foreseen. In the following exposition, we describe a problem that was encountered in one mathematical area and resolved with the tools of another, but not without roadblocks. This story not only arrives at a nice conclusion (the problem could be solved), but along the way uncovers subtleties of the theory that may not be immediately apparent to a new user. Who knows, these gems might also find themselves useful in other contexts. It is with this goal in mind that this manuscript exists, and we hope that the reader will bear with us as we explicate the details and hint at where some improvements can be made. Recently, Wiart and Wong~\cite{WiartWong20} derived a formula for the covariance of an integral estimator for functions satisfying a certain decay condition, based on a quasi-Monte Carlo framework developed by Wiart, Lemieux, and Dong~\cite{WiartLemieux19}. More specifically, the latter introduced some randomness into selected point sets to improve the uniform distribution of points, giving access to certain probabilistic error estimates. The former then proposed some conditions on functions in terms of their Walsh coefficients and were able to deduce a simplified formula for the covariance, which measures the effectiveness of a randomized quasi-Monte Carlo (RQMC) estimator of the integral of the function over the unit hypercube using the randomized point set. The conclusion in the paper~\cite{WiartWong20} is that under the proposed conditions, the covariance can be shown to be nonpositive and therefore the RQMC estimator performs better than the standard Monte Carlo estimator. This covariance formula is written as the following polynomial in~$x$, \begin{equation} \label{eq:mainpoly} G_s(x):=\sum_{k=1}^{m+s-1}\Bigg(\sum_{r=1}^{s}\binom{s}{r}\binom{k-1}{r-1}\frac{b-1}{(-b)^r}\sum_{i=0}^{r-1-c_m(k)}(-b)^i\binom{r-1}{i}\Bigg)(bx)^k, \end{equation} where~$c_m(k)=\max(k-m,0).$ The goal in~\cite{WiartWong20} is to show that $G_s(x)\leq 0$ for all $b, m, s \in \mathbb {N}$, $b\geq 2$, and $x\in[0,1)$. Much of the QMC literature focuses on optimizing point sets to achieve desired distribution properties and on using techniques from analysis to improve bounds on estimators (see \cite{DickHinrichsPillichshammer15} and \cite{DickPillichshammer10} for an overview). Unfortunately, we found that such techniques do not enable us to reach the desired conclusion. Accordingly, we choose to approach this problem by employing symbolic computation rather than analysis: using the available computer algebra software for holonomic functions~\cite{Kauers09,Koutschan10,Schneider07}, we carry out a guess-and-prove strategy that ultimately leads us to deduce a suitable closed form for~\eqref{eq:mainpoly}. The result is an expression in terms of regularized beta functions, which allows us to show the desired nonpositivity statement. This article serves to highlight the ``proving'' aspect of the strategy, i.e., the derivation and proof of a third-order recurrence that~\eqref{eq:mainpoly} satisfies. In computer algebra, such computations are typically handled by the method of \textit{creative telescoping} \cite{Zeilberger90}.
Implementations already exist that serve this purpose \cite{Chyzak97,KauersJaroschekJohansson15,Koutschan09,Schneider07}, with the caveat that there are many strategies that can be used in combination to make the computation more effective, and in some cases to resolve issues that the implementations do not handle automatically. Thus, we implore the reader not to immediately jump into the first strategy that is presented here, but rather to think of the collection as a buffet: some strategies might work really well for one problem, and not as much for another. We want to make it clear that a rigorous derivation and motivation for \eqref{eq:mainpoly} can already be found in \cite{WiartWong20} and we will not repeat what has already been written. Instead, our goals here are intended for two types of researchers: \begin{itemize} \item those who may want to see the types of objects that are amenable to state-of-the-art symbolic computation techniques; \item those who may be interested in automating symbolic computation software and are interested in seeing the grisly details and roadblocks that appear with the use of current methods. \end{itemize} \section{Background}\label{sec:background} At first glance, we note that all constituents of~\eqref{eq:mainpoly} have the property of being ``holonomic''. For the purposes of this paper, we stay slightly informal (but still rigorously practical) and use the definition that holonomic functions are sequences which satisfy ``sufficiently many'' linear recurrences with polynomial coefficients. One convenience of dealing with such functions comes from the fact that holonomicity is preserved through basic operations: we will refer to these as ``closure properties''. For example, the product of two holonomic functions is again holonomic~\cite[Proposition 3.2]{Zeilberger90}. From a computational point of view, if we know the recurrences that the two holonomic functions satisfy, we can construct a recurrence that their product satisfies and even bound its order and the degrees of its polynomial coefficients. Our expression~\eqref{eq:mainpoly} consists of summation quantifiers, (products of) binomial coefficients, and polynomial/exponential functions in the parameters. The binomial coefficient, for example, can be described completely via two recurrences and finitely many initial conditions. From our usual notion of binomial coefficients, we can immediately write down the two recurrences (valid on $\mathbb{Z}\times\mathbb{Z}$): \begin{align}\label{eq:recbinom} \begin{split} (n-k+1)\binom{n+1}{k}-(n+1)\binom{n}{k}=0,\\ (k+1)\binom{n}{k+1}-(n-k)\binom{n}{k}=0, \end{split} \end{align} and specify the initial conditions \[ \binom{0}{0}=1,\quad \binom{-1}{0}=0,\quad \binom{-1}{-1}=0. \] By doing so, we interpret the binomial coefficient over the integers in the traditional (combinatorial) way, namely, it is nonzero only for $0\leq k\leq n$. We explicitly highlight this because the computer algebra system Mathematica uses a more general notion of the binomial coefficient, which extends its definition to the negative integers. It is important to note that this generalized binomial coefficient is defined by the very same recurrences, but just uses different initial conditions: \[ \binom{0}{0}=1,\quad \binom{-1}{0}=1,\quad \binom{-1}{-1}=1.
\] Unfortunately, the more general view of the binomial coefficient affects the natural boundaries of our summations: the summands containing these coefficients behave inappropriately outside of our prescribed bounds, which is of course irrelevant regarding the definition of~$G_s(x)$, but which may cause problems when evaluating boundary terms originating from the telescoping sums. We will have to address this issue later. In the following, we will use this simple bivariate sequence $\binom{n}{k}$ to illustrate the main features of the holonomic systems approach. Using the notation $S_n$ resp.\ $S_k$ to represent the forward shift operator in the given variable, we can rewrite \eqref{eq:recbinom} so that each of the corresponding operators \begin{align}\label{eq:bcann} \begin{split} (n-k+1)S_n-(n+1),\\ (k+1)S_k-(n-k), \end{split} \end{align} maps $\binom{n}{k}$ to the zero sequence. We say that these operators annihilate the given function. As one can see, the translation between recurrence and operator can be read off immediately. Viewing recurrences as operators enables the use of algebraic methods to manipulate them more efficiently. However, we then have to deal with objects that do not always commute. The appropriate algebraic framework to represent such operators is an Ore algebra. In the following technical definition, $\partial$ serves as a placeholder for any of our operator symbols~$S_n$ or~$S_k$. \begin{defn}\label{def:orealgebra} Let $R$ be a ring. \begin{enumerate} \item If $\sigma\colon R\to R$ is a ring endomorphism and $\delta\colon R\to R$ is such that additivity and the ``skew'' Leibniz law are satisfied, that is, \[\delta(f+g)=\delta(f)+\delta(g),\] \[\delta(fg)=\delta(f)g+\sigma(f)\delta(g),\] for all $f,g\in R$, then $\delta$ is called a \emph{$\sigma$-derivation.} \item Suppose now that there is an endomorphism $\sigma\colon R\to R$ and a $\sigma$-derivation $\delta\colon R\to R$. Suppose further that a ring structure is defined on the set $R[\partial]$ of univariate polynomials in~$\partial$ with coefficients in~$R$, equipped with the usual addition, and multiplication is such that \[ \partial^i \partial^j=\partial^{i+j} \quad\text{and}\quad \partial f=\sigma(f)\partial+\delta(f) \qquad\text{ for all } i,j\in\mathbb {N} \text{ and } f\in R. \] Then $R[\partial]$ is an \emph{Ore algebra} over~$R$. We typically use the symbol~$\mathbb{O}$ to denote such algebras. \item Suppose that $f$ is in the left-$R[\partial]$-module $R$ with action $\cdot~\colon~\mathbb{O}\times R\rightarrow R$ such that $1\cdot f=f$ and $L_1\cdot(L_2\cdot f)=(L_1L_2)\cdot f$ for all $L_1,L_2\in \mathbb{O}$. Then we say \[ \operatorname{ann}(f)=\{ L\in R[\partial] \mid L\cdot f = 0\} \] is the \emph{annihilator} of~$f$ in~$R[\partial]$. \end{enumerate} \end{defn} For the binomial coefficient $\binom{n}{k}$, the Ore algebra that we use is $\mathbb{O}=R[S_k,S_n]$ with $R=\mathbb{Q}(n,k)$. A quick check shows that shift operators satisfy the required commutation properties. In the definition of this Ore algebra, each $\sigma$ denotes a forward shift operation (clearly a ring endomorphism) and $\delta \equiv 0$ (clearly a $\sigma$-derivation). The reader may have wondered why we did not consider Pascal's rule as a potential defining recurrence for the binomial coefficient. The reason is that~\eqref{eq:recbinom} are quite canonical generators for the set of all recurrences satisfied by~$\binom{n}{k}$.
In algebraic terms, we can formulate this statement precisely: the annihilator of~$\binom{n}{k}$, which is a left ideal in~$\mathbb{O}$, is generated by the operators~\eqref{eq:bcann}, let's call them $P_1,P_2\in\mathbb{O}$: \[ \operatorname{ann}\binom{n}{k} = \{ C_1\cdot P_1 + C_2\cdot P_2 \;\mid\; C_1,C_2 \in\mathbb{O} \}. \] Moreover, the two operators in \eqref{eq:bcann} even form a (left) Gr\"obner basis of $\operatorname{ann}\binom{n}{k}$. The key tool used in rigorously deriving a ``grand'' recurrence for $G_s(x)$ lies in the highly touted creative telescoping algorithm~\cite{Zeilberger91} for symbolic sums and integrals, as implemented in the HolonomicFunctions package~\cite{Koutschan10}. In order to construct a recurrence for a symbolic parametric sum of the form \begin{equation}\label{eq:symbsum} \sum_k \mbox{summand}, \end{equation} the algorithm takes as input a list of generators, like the ones in \eqref{eq:bcann}, for an annihilating ideal of the summand. If the summand is given as a closed-form expression, then such a list is automatically computed, provided that it is recognized to be holonomic. The algorithm then identifies lists of operators $\mathcal{P}$ and $\mathcal{Q}$ (in the form of Ore polynomials in the algebra as described in Definition~\ref{def:orealgebra}) such that for each $P\in\mathcal{P}$ and its corresponding $Q\in\mathcal{Q}$, the operator $P-(S_k-1)\cdot Q$ is an element of the given annihilating ideal. The set $\mathcal{P}$ contains the so-called ``telescopers'' (all of which are free of $k$ and $S_k$ but may contain the other parameters), and the set $\mathcal{Q}$ the corresponding ``certificates''. How do these objects help us? Summing with respect to~$k$ gives relations of the form \begin{equation}\label{eq:ct} \sum_k P\cdot \mbox{summand}-\sum_k(S_k-1)\cdot Q \cdot \mbox{summand} = 0. \end{equation} In a best-case scenario, each of the $P$'s commutes with the first summation in~\eqref{eq:ct} (allowing us to pull it out of the sum so that we can view the elements of $\mathcal{P}$ being applied to the whole sum and not just the summand) and the second summation collapses to zero by telescoping (leaving no trace of the certificate). From there, we would conclude that $\mathcal{P}$ generates a left ideal of annihilating operators for~\eqref{eq:symbsum}, that is, it represents a set of recurrences which are satisfied by the sum. Coming back to our triple sum~\eqref{eq:mainpoly}, we can repeatedly apply this process until a recurrence for the outermost (and hence the whole) sum is deduced. However, life is not always that easy: during the application of this strategy to the particular summation problem~\eqref{eq:mainpoly}, we encountered the following difficulties that are somewhat prototypical for the holonomic systems approach. This explains why, despite being automatable in principle, it still lacks a press-the-button implementation that would provide a computer proof of a claimed identity in a completely automatic way and without any human interaction. \begin{enumerate}[itemsep=2pt,parsep=2pt] \item The summand, i.e., the expression inside a summation quantifier, may take on nonzero values outside of the respective summation bounds. Thus, there is no reason to expect a~priori that the second summation in~\eqref{eq:ct} will evaluate to zero. And indeed, we found that it did not, and such terms constitute some of the ``inhomogeneous parts'' of the equation. An additional annihilator for them is required in order to homogenize the recurrence. 
\item The upper boundaries contain the variable~$s$, and the operators in~$\mathcal{P}$ contain shifts in~$s$, causing difficulties when moving~$P\in\mathcal{P}$ outside of the sum. \item Some of the operators in~$\mathcal{Q}$ contain singularities at the boundary values, so we were forced to exclude these values (which required compensation elsewhere). This is because the sum in~\eqref{eq:ct} containing the certificates is designed to collapse to only boundary-value evaluations, and we encounter problems if the summands are undefined at such values. Further issues could surface if those summands were also undefined at some intermediate value. Luckily, this was not the case here. \item Mathematica, in its symbolic zeal, rewrites the innermost sum in~\eqref{eq:mainpoly} as a hypergeometric $_2F_1$ series and the second innermost sum as a DifferenceRoot. While the values of the $_2F_1$ function match those of our sum within the domain in question, there are still infinitely many values at which they do not. The DifferenceRoot is Mathematica's version of a recurrence together with initial values, but it is unfortunately not helpful for our purposes because it is incompatible with HolonomicFunctions and does not support the multivariate recurrences that are needed for creative telescoping. \end{enumerate} We illustrate the first three difficulties with a toy example. For thoroughness, we apply our strategy fully to this example to give a sufficient idea of the broader behavior. From this point on, we will periodically perform the service of demonstrating how computers and humans interact, by highlighting (in brackets) when paper-and-pencil reasoning is used and when automation is applied. Suppose we want to rigorously determine an annihilating operator for \begin{equation}\label{eq:binomialsum} \sum\limits_{k=5}^{n}\binom{n}{k} \end{equation} for $n\geq 5$. In other words, we would like to identify a recurrence that it satisfies. The creative telescoping algorithm (computer) outputs the telescoper $P=S_n-2$ and the certificate $Q=\frac{k}{k-n-1}$. Then \eqref{eq:ct} implies \[ \sum\limits_{k=5}^{n}(S_n-2)\binom{n}{k}-\underbrace{\left(\frac{k}{k-n-1}\binom{n}{k}\Biggr|_{k=n+1}-\frac{k}{k-n-1}\binom{n}{k}\Biggr|_{k=5}\right)}_{\text{collapsed sum with singularity at } k=n+1}=0, \] with the summation containing the certificate collapsing to only evaluations at the boundary values. We note that $\binom{n}{k}$ is nonzero for $k=0,\ldots,4$, i.e., outside of the summation bounds. After substituting $k=5$ we get a nonzero contribution in the certificate (compare this to the situation where the lower bound is $\leq 0$). We next note that the certificate has a singularity at $k=n+1$, which prevents the left expression in the large brackets from being evaluated. We can therefore choose to sum up to $n-1$ instead (then the boundary evaluation will occur at $k=n$ rather than $k=n+1$). With this,~\eqref{eq:ct} turns into \begin{equation}\label{eq:ctexample} \sum\limits_{k=5}^{n-1}(S_n-2)\binom{n}{k}-\underbrace{\left(\frac{n^4 - 6n^3 + 11n^2 - 30n}{24}\right)}_{\text{inhomogeneous part}}=0. \end{equation} This fixes the issue of the singularity (alternatively, we could have rewritten $\frac{k}{k-n-1}\binom{n}{k}=\frac{-k}{n+1}\binom{n+1}{k}$ to get rid of the pole). Next, we note that the upper summation bound contains the parameter $n$ while our telescoper $P$ contains the shift operator $S_n$: applying the operator to the whole sum affects both the upper bound and $\binom{n}{k}$.
This is fixed with the (human) observation that \[ \sum\limits_{k=5}^{n-1}(S_n-2)\binom{n}{k}=(S_n-2)\sum\limits_{k=5}^{n}\binom{n}{k}\underbrace{-\binom{n+1}{n}-\binom{n+1}{n+1}+2\binom{n}{n}}_{\text{compensated terms $=-n$}}. \] In this situation, we say that the operator and the summation do not commute. Then~\eqref{eq:ctexample} simplifies to the inhomogeneous recurrence \[ (S_n-2)\sum\limits_{k=5}^{n}\binom{n}{k}=\frac{n^4-6n^3+11n^2-6n}{24}. \] If one prefers a homogeneous recurrence, an annihilating operator for the right-hand side can be determined to be $(n-3)S_n-(n+1)$. We can therefore conclude that \[ \big((n-3)S_n-(n+1)\big)\cdot(S_n-2)=(n-3)S_n^2+(5-3n)S_n+2(n+1) \] is the desired annihilating operator for~\eqref{eq:binomialsum}. As is expected in such simple examples, the recurrence can be solved (by the computer) to obtain a closed form for~\eqref{eq:binomialsum}. Of course, it agrees with the one that one directly gets from invoking the binomial theorem. In the above example, we can see that there is a ``dance'' between the human and the computer, and only upon careful collaboration does it bear fruit. We now proceed to use a similar strategy to attack the big sum $G_s(x)$ and furthermore present some alternatives to improve performance. The total computation time largely depends on how complicated the summands and inhomogeneous parts turn out to be after (human) simplification. The next section outlines some of these strategies and in particular highlights how we were able to successfully derive (and prove) a recurrence for $G_s(x)$. \section{A Playbook for the Holonomic Approach} This section illustrates how to generally overcome the difficulties listed in the previous section and how to effectively perform the human-computer dance to prove our main result. We envision that the discussion leads to a deeper understanding of the practical issues when applying the holonomic systems approach and makes it accessible for other applications. The Mathematica notebook containing implementations of these strategies can be found in the online supplementary material \cite{KoutschanWong20}. \begin{thm}\label{thm:main} For $b,m,s\in\mathbb {N}, b\geq 2$, the polynomial given in \eqref{eq:mainpoly}, \[ G_s(x):=\sum_{k=1}^{m+s-1}\Bigg(\sum_{r=1}^{s}\binom{s}{r}\binom{k-1}{r-1}\frac{b-1}{(-b)^r}\sum_{i=0}^{r-1-c_m(k)}(-b)^i\binom{r-1}{i}\Bigg)(bx)^k, \] with~$c_m(k):=\max(k-m,0)$, satisfies the recurrence \begin{align*} &(s+2)(b x-1)\cdot G_{\!s+3}\\ &+\left(m(bx-1)(x-1)+bsx(x-2)+bx(x-3)-s(2x-3)-3 x+5\right)\cdot G_{\!s+2}\\ &-(x-1)(b m x+b s x+b x+m x-2 m+s x-3 s+x-4)\cdot G_{\!s+1}\\ &+(x-1)^2 (m+s+1)\cdot G_{\!s} = 0. \end{align*} \end{thm} We note again that this result is already contained in \cite[Lemma 15]{WiartWong20}, with computational details found in~\cite{KoutschanWong20}. We remark that a recurrence in~$m$ could also be derived, but for the application in question the above recurrence in~$s$ was sufficient. The following discussion serves to outline alternate (and in some cases, faster) proof strategies, to provide some exposition for technical details that were not mentioned in \cite{WiartWong20}, and to explicitly resolve some of the issues mentioned in the previous section in as much generality as possible, using our problem as a case study. We hope that this will be useful for future practitioners. \subsection{Preprocessing the triple sum $G_s(x)$} Before we dive in, we make a few remarks about how to view $G_s(x)$ to make our life easier.
On the one hand, the summation~\eqref{eq:mainpoly} can be separated into two parts $G_s=G_s^{(1)}+G_s^{(2)}$, in order to remove the max function in the upper limit of the innermost sum. After a mild simplification (human), these two parts look as follows: \begin{align*} G_s^{(1)} & :=-\!\!\!\sum_{k=1}^{m+s-1}\sum_{r=1}^s\binom{s}{r}\binom{k-1}{r-1}\left(\frac{b-1}{b}\right)^r(bx)^k,\\ G_s^{(2)} & :=\sum_{k=m+1}^{m+s-1}\,\sum_{r=1}^{s}\binom{s}{r}\binom{k-1}{r-1}\frac{1-b}{(-b)^r}\sum_{i=r-(k-m)}^{r-1}(-b)^i\binom{r-1}{i}(bx)^k. \end{align*} Observe that $-G_s^{(2)}$ is the collection of terms that is added to $G_s$ to enable the sum to collapse to $G_s^{(1)}$. We write this out to show that initially applying the full strategy to ``only'' the double sum $G_s^{(1)}$ gives a hypothetical lower bound for the time and effort required to treat the whole sum $G_s$. We note that if $k>m+s-1$, then there is no reason to expect that either of the inner sums would be zero, which may cause the inhomogeneous parts in~\eqref{eq:ct} to survive. The split sums also serve as an example of how to apply closure properties: the sum of holonomic functions is still holonomic \cite[Proposition 3.1]{Zeilberger90}, so \[ \operatorname{ann}\left(G_s^{(1)}+G_s^{(2)}\right) \] can be deduced by executing (computer) the corresponding ``closure property of addition'' algorithm after separately computing a respective annihilating ideal for each of the two terms. This closure property can also be applied in intermediate computations (for example, during the treatment of the inhomogeneous parts). However, the user should be aware that there is a risk that the recurrence order (more precisely: the holonomic rank) may increase during each such application (but at most to the sum of the orders of the two inputs). We learned that the splitting of the sums, which we initially thought was a clever idea, turned out to be less than optimal in terms of computational resources for this particular problem, in ways that will be described in the next few sections. On the other hand, we can also choose to deal with the full triple sum right from the start. Observe that when $k-m<0$, any index $i$ with $r-1<i\leq r-1-(k-m)$ forces the innermost binomial coefficient $\binom{r-1}{i}$ to be 0. Thus, the max function in $G_s(x)$ can be safely removed. We can also move all summations to the front and consider only one summand with three indexed parameters. In other words, $G_s(x)$ can be rewritten as \begin{equation}\label{eq:triplesum} \underbrace{\sum_{k=1}^{m+s-1}\sum_{r=1}^{s}\sum_{i=0}^{r-1-(k-m)}}_{\text{quantifiers grouped}} \underbrace{\vphantom{\sum_{k=1}^{m-1}} \binom{s}{r}\binom{k-1}{r-1}\binom{r-1}{i}\frac{b-1}{(-b)^{r-i}}(bx)^k}_{\text{one \vphantom{gqp}summand}}. \end{equation} In general, the creative telescoping algorithm works very well on these kinds of symbolic sums (in all forms as described above), and can be applied directly without adjustments if all conditions are ``ideal''. In such cases, the outputted telescoper corresponds exactly to the desired recurrence. Unfortunately, this kind of naive ``hey let's give it a try'' multiple/parallel application of creative telescoping to both the split sum case and to \eqref{eq:triplesum} resulted in some incorrect first-order recurrence (which was easily debunked by plugging in a few values). Hence, (human) adjustment was needed.
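To give a concrete flavor of the naive attempt, it could be carried out with the HolonomicFunctions package roughly along the following lines (a sketch, not a verbatim transcript of our session; we assume the package is loaded and that shift operators are entered in its \texttt{S[$\cdot$]} notation):
\begin{verbatim}
(* annihilating ideal of the summand of the triple sum *)
ann = Annihilator[Binomial[s, r]*Binomial[k - 1, r - 1]*Binomial[r - 1, i]*
        (b - 1)/(-b)^(r - i)*(b*x)^k, {S[i], S[k], S[r], S[s]}];
(* creative telescoping with respect to the innermost summation variable i;
   the output contains the telescopers and the matching certificates *)
{telescopers, certificates} =
  CreativeTelescoping[ann, S[i] - 1, {S[k], S[r], S[s]}];
\end{verbatim}
Iterating this over the remaining summation variables $r$ and $k$ produces candidate recurrences, but, as just explained, the raw output is incorrect without the boundary adjustments described in the following subsections.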
For the split sum case, these adjustments produced extraneous terms that were not a part of the original sum and/or came from compensations due to the rebuilding of the original sum. We subsequently collected all such terms to find a collective annihilator for them, for the purpose of ``homogenizing'' the recurrence given by the telescoper. All of these techniques are described in Sections \ref{sec:sing}--\ref{sec:sub}. For \eqref{eq:triplesum}, we used different approaches, and these are outlined in Sections~\ref{sec:gamma} and~\ref{sec:gf}. \subsection{Guessing} Another ``preparation'' step involves employing the Guess package~\cite{Kauers09} to predict the recurrence that our polynomial satisfies, by using sufficiently generic evaluations of~\eqref{eq:triplesum}; that is, we evaluate our polynomial for a finite number of values of our main variables $x,b,m,s$ and use the resulting data to reconstruct the coefficients of the recurrence with the command \textsc{GuessMultRE}. We furthermore impose what we believe is the general shape of the recurrence, and if such a recurrence exists, the guessing procedure will produce one. This serves as an additional sanity check for future calculations: if the creative telescoping algorithm produced a recurrence of higher order, then we would know that it overshoots, and we could try to find a different approach that would produce a better result. For our problem, the (computer) guessing procedure already produced the claimed minimal third-order recurrence (from Theorem \ref{thm:main}) in the parameter~$s$. This means that we know $G_s$ satisfies the recurrence for at least a finite number of values of~$s$. To prove that the guess is correct (i.e., that the recurrence holds for all~$s$), it is enough to compute the same recurrence (or a higher-order one) via creative telescoping. In the latter case, one also has to verify that the guessed recurrence (operator) is a right factor of the bigger one, and consider a sufficient number of initial values. \subsection{Dealing with Singularities}\label{sec:sing} We found that some certificates~$Q$ contain singularities at the boundary values of the inner sum. This implies that the limits of the sum must be adjusted so that we avoid evaluating at those points. In fact, the summation range must be adjusted so that there are no singularities at \textbf{all} intermediate values. To illustrate this a little more generally, suppose that, upon applying the creative telescoping algorithm to the summation $\sum_{r=1}^s F(s,r)$ with respect to~$r$, the computer outputs a certificate $Q \in \mathbb{Q}(s,r)$ containing poles at $r_i\in~[1,s+1]\subset\mathbb{N}$ for a finite number of $i$. All parameters besides $r$ are treated symbolically. Then the sum $\sum_{r=1}^s (S_r-1)\cdot Q \cdot F(s,r)$ cannot be determined, because the required evaluations at those poles are not possible, and therefore the sum is undefined. Such singularities can be removed from the offending sum so that the evaluation(s) can happen. We also remove the exact same values from the summations containing the telescopers, so that \eqref{eq:ct} still makes sense. The summations with the telescopers can then subsequently be ``filled in'' with the removed summands (balanced, of course, by subtracting those same terms from the inhomogeneous part). This strategy can be quite effective if all of the poles are collected contiguously at either of the summation bounds, or if there are only one or two.
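In practice, such poles reveal themselves immediately in the denominators of the certificates returned by the algorithm. For the inner sum of $G_s^{(1)}$, which is treated next, the relevant computation can be sketched as follows (again assuming the HolonomicFunctions package; the printed certificate is to be read up to normalization):
\begin{verbatim}
innerSummand = Binomial[s, r]*Binomial[k - 1, r - 1]*((b - 1)/b)^r*(b*x)^k;
{tele, cert} = CreativeTelescoping[innerSummand, S[r] - 1, {S[k], S[s]}];
cert  (* the factor r - (s + 1) in its denominator signals the pole at r = s+1 *)
\end{verbatim}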
In the case of the inner sum of $G^{(1)}_s$, \[\sum\limits_{r=1}^s\underbrace{\binom{s}{r}\binom{k-1}{r-1}\left(\frac{b-1}{b}\right)^r(bx)^k}_{\text{summand}},\] applying creative telescoping produces a certificate $Q=\frac{-bxr(r-1)(k-s)}{(k-r+1)(r-(s+1))}$ corresponding to the telescoper $P=bsxS_s-(k+1)S_k+x(k-s)$. Clearly, $Q$ has a singularity at $r=s+1$, which prevents us from certifying that $P$ is an annihilator for this sum. Similar to the toy example, the fix here is simply to acknowledge the relation \eqref{eq:ct} only up to $s-1$: \begin{equation}\label{eq:ctrelsing1} \sum\limits_{r=1}^{s-1} P\cdot \text{summand}-\sum\limits_{r=1}^{s-1}(S_r-1)\cdot Q \cdot \text{summand} = 0. \end{equation} The game is now to rearrange terms so that we can get the annihilator for $\sum_{r=1}^s \text{summand}$. The right summation in \eqref{eq:ctrelsing1} is a telescoping sum that can now be evaluated easily at the boundary terms. We refer to this as the ``delta part''. The remaining inhomogeneous terms will come from the left sum of~\eqref{eq:ctrelsing1}, where we insert the $r=s$ term and compensate for the insertion with $-P\cdot\text{summand}|_{r=s}$, which we will call the ``compensated term''. Thus, \eqref{eq:ctrelsing1} becomes \begin{equation}\label{eq:ctrelsing2} \sum\limits_{r=1}^{s} P\cdot \text{summand}+\underbrace{\left(\text{compensated term}+\text{delta part}\right)}_{\text{inhomogeneous terms}}=0. \end{equation} We are closer to our goal, but run into a new problem: $P$ contains a shift in $s$, while $s$ occurs in the upper limit. So, we cannot factor out this $P$ without a little more work. We will address this in the next section. Note that if the number of compensated terms required is too large, it may be better to consider another strategy, such as rewriting terms in some alternative (but equivalent) form to avoid the poles entirely (as was suggested in the analysis for~\eqref{eq:binomialsum}). But in general, there may not be an easy way to rewrite such expressions: the ease with which this is possible depends on the properties of the objects at hand. \subsection{Pulling Operators Outside of the Sum}\label{sec:commute} We also found that the telescoper~$P$ does not commute with our summation. To illustrate this a little more generally, suppose the operator~$P$ is in the Ore algebra generated by the shift operator~$S_s$. In other words, suppose that $P$ can be written as some polynomial in $S_s$, for example, \[ P=p_0+p_1S_s+\cdots+p_jS_s^j, \] where the $p_i$ ($i=0,\ldots,j$) may be rational functions containing the parameter~$s$. If we apply such a $P$ to a summation of the form $\sum_{k=1}^{m+s-1}H(s,k)$, then we face the issue that the application not only affects the parameter $s$ in the summand $H(s,k)$, but also the upper limit $m+s-1$. Then if we apply $P$ to the whole sum, we get \begin{equation}\label{eq:commute1} p_0\!\!\sum\limits_{k=1}^{m+s-1}\!\!H(s,k) \;+\; p_1\!\sum\limits_{k=1}^{m+s}\!H(s+1,k) \;+\cdots+\; p_j\!\!\!\!\!\sum\limits_{k=1}^{m+s+j-1}\!\!\!\!\!H(s+j,k). \end{equation} It is quite obvious that this is not the same as applying $P$ to only the summand $H(s,k)$: \begin{equation}\label{eq:commute2} \sum\limits_{k=1}^{m+s-1} \Bigl(p_0H(s,k)+p_1H(s+1,k)+\cdots +p_jH(s+j,k)\Bigr). \end{equation} However, we can simulate the ``factoring out'' of the $P$ in~\eqref{eq:commute2} if we peel off a sufficient (and finite) number of terms from each sum in~\eqref{eq:commute1} such that their upper limits are all $m+s-1$.
Then~\eqref{eq:commute2} can be replaced by the peeled version of~\eqref{eq:commute1} with $P$ on the outside, and the removed terms can then be merged with the inhomogeneous part. Continuing with our example from the previous section, we can see that we are a little lucky in that the $P$ in \eqref{eq:ctrelsing2} only has one shift in $s$ (the ideal case being that $P$ has no shift in~$s$). So in this case we just have to peel off one term $P\cdot\text{summand}|_{r=s+1}$, which we will call the ``comp $S$-shift''. Thus, \eqref{eq:ctrelsing2} becomes \[P\cdot \sum\limits_{r=1}^{s} \text{summand}+\underbrace{\left(\text{comp $S$-shift}+\text{compensated term}+\text{delta part}\right)}_{\text{inhomogeneous terms}}=0.\] Let $R$ be the annihilator for the inhomogeneous terms. Then $R\cdot P$ is the annihilator for $\sum_{r=1}^s\text{summand}$. Sometimes, $R$ is easy to compute, sometimes it is not. The next section addresses how to deal with $R$ if it is not. \subsection{Treatment of Inhomogeneous Parts} \label{sec:inhom} In all of our examples, the adjustment of the summation limits to avoid singularities in the certificates was completed first. After that, it is a~priori not clear if one should proceed to adjust for the operator commuting with the summand or to fill in the terms that were removed for the singularities. The (human) decision may depend on the number of singularities (bounded above by the degree of the denominators of the certificates), where the singularities are located, how complicated the telescoper expression is, and whether or not the lower/upper boundaries are influenced by the telescopers. This makes it difficult to automate adjustments effectively. Once we have collected all of the inhomogeneous parts, we face the question of how to process them. In principle, we could just write them down and try to use Mathematica's symbolic power to simplify them as much as possible. Unfortunately, this does not work very well on our problem, with the only progress being that some of the inhomogeneous parts conveniently collapse to zero (so we remove them). Instead, we take advantage of the fact that we can write all of the inhomogeneous parts as different shifts and substitutions of the given summand. More precisely, the total of some of these parts can be expressed as an operator applied to the summand, followed by a substitution. Then, an annihilator for this total can be derived by applying the closure property ``application of an operator'', followed by an ``integer-linear substitution''. In this way, we completely avoid dealing with expressions like Mathematica's DifferenceRoots. We illustrate a hands-on approach to finding an operator to apply with an example that comes from the inhomogeneous parts for $G^{(1)}$, which contain hypergeometric $_2F_1$ series that we write out in full detail below. The strategy involves observing patterns in a complicated expression to construct an operator that would give the same result when it is applied to some simpler version of the expression.
Given \begin{align}\label{eq:inhomG1} \begin{split} \frac{(b-1)(m+s+1)(bx)^{m+s+1}}{b^2x}\cdot {}_2F_1\left(1-s,-m-s;2;\frac{b-1}{b}\right)\\ -\frac{(b-1)(m+bs)(bx)^{m+s}}{b^2}\cdot {}_2F_1\left(1-s,1-m-s;2;\frac{b-1}{b}\right)\\ +\frac{(b-1)(s+1)(bx-1)(bx)^{m+s}}{b}\cdot {}_2F_1\left(1-m-s,-s;2;\frac{b-1}{b}\right), \end{split} \end{align} we can see that selecting the operator \begin{align}\label{eq:inhomG1newop} \frac{(b-1)(m+s+1)}{b^2x(bx)}S_m^2-\frac{(b-1)(m+bs)}{b^2(bx)}S_m+\frac{(b-1)(s+1)(bx-1)}{b(bx)}S_s \end{align} and applying it to $(bx)^{m+s}\cdot {}_2F_1\bigl(1-s,2-m-s;2;\frac{b-1}{b}\bigr)$ results in~\eqref{eq:inhomG1}. Such an operator is certainly not unique and there is no hard rule to construct one, but with a little experimentation and a focused goal of attaining one reasonably ranked operator to be applied to one function (rather than three different ones), one can determine in a practical amount of time whether such an operator exists and, if so, construct it. We can also make minor adjustments, such as moving the factor $b-1$ from the operator to the function in the example above, and deduce an annihilating operator in this way. However, minor changes like these will not bring about much improvement in computational efficiency. Thus, we want to emphasize that this part of the game is not really algorithmic, but rather the human's job to decide what is most elegant. The annihilator for \eqref{eq:inhomG1} can therefore be obtained by ``applying''~\eqref{eq:inhomG1newop} (in the sense of closure properties) to the annihilator of this single expression. This process is much faster than trying to directly compute an annihilating ideal of the sum of hypergeometric series, with the added benefit that the order of the recurrence will usually be smaller compared to applying the \textsc{Annihilator} command directly to expressions such as~\eqref{eq:inhomG1}. In our particular example, the latter method even caused the program to crash. We can therefore see that directly constructing an operator by closely inspecting patterns in the inhomogeneous parts can be computationally effective. In a way, the fact that we use $_2F_1$'s in the previous argument is inconsequential to the construction of the operator to be applied (it could easily have been replaced with a symbolic expression that exhibits similar shift behaviors, for example). Thus, an annihilating operator for the inhomogeneous terms can be deduced in this way \textit{before} administering any substitutions. As mentioned before, this could also involve removing all terms that would collapse to zero anyway and building from scratch the new operator by only using the shifts needed to produce compensation terms that may have resulted from the treatment of singularities and commutation. Applying this strategy to the inhomogeneous parts of $G_s^{(2)}$, we get an acting operator that lies in the Ore algebra $\mathbb{Q}(b,k,m,s,x)(S_k,S_s)$ and has the support \begin{equation}\label{eq:G2R2act} \lbrace S_k^2S_s^3,S_kS_s^3,S_kS_s^2,S_s^3,S_s^2,S_k,S_s,1 \rbrace. \end{equation} Comparing this with \eqref{eq:inhomG1newop} gives an indication of why the annihilator computation for $G^{(2)}_s$ would take longer. Furthermore, after ``applying'' this new operator to the annihilator of the summand, we still had the additional step of making a substitution $k\rightarrow m+s$, costing us nearly 30 hours. See the table in Figure \ref{fig:intermediaryresults} for a summary of the shapes and sizes of these objects.
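This construction, too, is easy to double-check by machine. The following sketch is our own Python/SymPy illustration: the terminating $_2F_1$'s are implemented as finite sums, $b$ and $x$ are given generic rational values, and the check confirms that \eqref{eq:inhomG1newop}, applied to the single function above via shifts in $m$ and $s$, reproduces \eqref{eq:inhomG1} exactly.

\begin{verbatim}
import sympy as sp

b, x = sp.Rational(7, 3), sp.Rational(2, 5)   # generic rational values
z = (b - 1)/b

def f21(a1, a2, c, z):
    # terminating 2F1: a1 is a non-positive integer here
    return sum(sp.rf(a1, n)*sp.rf(a2, n)*z**n/(sp.rf(c, n)*sp.factorial(n))
               for n in range(-a1 + 1))

def g(m, s):
    # the single "simpler" function that the operator acts on
    return (b*x)**(m + s)*f21(1 - s, 2 - m - s, 2, z)

def inhom(m, s):
    # the three-term expression (eq:inhomG1)
    return ((b - 1)*(m + s + 1)*(b*x)**(m + s + 1)/(b**2*x)
            * f21(1 - s, -m - s, 2, z)
            - (b - 1)*(m + b*s)*(b*x)**(m + s)/b**2
            * f21(1 - s, 1 - m - s, 2, z)
            + (b - 1)*(s + 1)*(b*x - 1)*(b*x)**(m + s)/b
            * f21(1 - m - s, -s, 2, z))

def op_applied(m, s):
    # (eq:inhomG1newop), acting on g by shifting m and s
    c2 = (b - 1)*(m + s + 1)/(b**2*x*(b*x))
    c1 = -(b - 1)*(m + b*s)/(b**2*(b*x))
    c0 = (b - 1)*(s + 1)*(b*x - 1)/(b*(b*x))
    return c2*g(m + 2, s) + c1*g(m + 1, s) + c0*g(m, s + 1)

for m in range(1, 6):
    for s in range(1, 6):
        assert inhom(m, s) - op_applied(m, s) == 0
print("the operator reproduces the inhomogeneous parts")
\end{verbatim}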
It is clear, then, that while our problem benefited from this strategy, it is still not optimal, and we proceed to present other ways to improve computational efficiency. \subsection{Substitution Speedup}\label{sec:sub} The collection of all inhomogeneous parts, and their subsequent removal via a collective annihilator, can eat up a lot of computation time, depending on how complicated these parts are. In particular, we experienced this in the computation of the annihilators for the inhomogeneous part of $G_s^{(2)}$ when applying the closure property of integer-linear substitution. Thus, as an alternative to blindly applying the corresponding computer command and not knowing what is going on behind the scenes while waiting patiently for the code to finish, we can take better control of the process by making a few additional optimizations, resulting in a significant speedup of computation time. Using the ``application of an operator'' closure property, with the method described in the previous section, we were able to produce an annihilating ideal, with its Gr\"{o}bner basis denoted by $U^{(2)}$ in the Ore algebra $\mathbb{Z}(b,k,m,s,x)(S_k,S_s)$, for the combined inhomogeneous parts of $G_s^{(2)}$, denoted by $H(s,k)$, but without the necessary substitution $k\rightarrow m+s$ according to the upper summation bound. This implies that it is necessary to apply the closure property ``integer-linear substitution'' to~$U^{(2)}$. It turns out that the above Gr\"obner basis $U^{(2)}$ has the set of irreducible monomials $\{S_s,S_k,1\}$ and hence is of holonomic rank~$3$. The theory tells us that for $H(s,m+s)$ we should expect a recurrence of order~$3$ in~$s$; however, we find by trial and error that an order~$2$ operator already works (which is in agreement with the result of the longer computation in Section~\ref{sec:inhom}). For constructing such a recurrence, we want to find an operator~$T$ in the left ideal generated by $U^{(2)}$ with the support $\{ S_s^2 S_k^2, S_s S_k, 1 \}$, corresponding to a bivariate recurrence involving the terms \[ H(s+2,k+2),\; H(s+1,k+1),\; H(s,k), \] which, after the substitution $k\rightarrow m+s$, turns into the desired second-order recurrence for $H(s,m+s)$. We therefore make the following ansatz for~$T$: \[ T=c_2(k,s)S_k^2S_s^2+c_1(k,s)S_kS_s+c_0(k,s), \] where the coefficients $c_0(k,s),c_1(k,s),c_2(k,s)$ are to be determined. Gr\"obner basis theory tells us that this~$T$ is an element of the annihilating ideal (in other words: represents a valid recurrence for $H(s,k)$) if and only if it reduces to~$0$ by the Gr\"obner basis~$U^{(2)}$. Reducing the ansatz~$T$ by $U^{(2)}$ results in a linear combination of the basis monomials $\{S_s,S_k,1\}$, i.e., an Ore polynomial of the form \[ E_2(k,s,c_0,c_1,c_2)S_s + E_1(k,s,c_0,c_1,c_2)S_k + E_0(k,s,c_0,c_1,c_2) \] with rational functions $E_0,E_1,E_2$. This polynomial is zero if and only if $E_0=E_1=E_2=0$, so we proceed to solve this system for $c_0, c_1, c_2$. Ultimately, this procedure gives us an Ore polynomial with the support we want (if we had chosen the support of $T$ too small, we would have realized that by getting no solution of the system $E_0=E_1=E_2=0$). This operator~$T$ annihilates the inhomogeneous parts after substituting $k\rightarrow m+s$ in its coefficients and omitting the shift operator $S_k$. This is essentially the same as saying: substitute $k\rightarrow m+s$ into the recurrence $T\cdot H(s,k)$.
However, since this substitution tends to decrease the size of expressions (it reduces the number of variables), it is desirable to perform it as early as possible, and not only at the very end. Indeed, we were able to speed up our computation significantly by performing the substitution already during the reduction of the monomials of~$T$, but care has to be taken: to match the leading monomials, we may have to multiply by (a power of) $S_k$, and for this (noncommutative) multiplication one needs to keep the variable~$k$. However, it can be substituted immediately afterwards. This leads to a less dramatic swell of expressions. Other sources of speedup include a manual selection strategy of Gr\"obner basis elements to be used for the reduction, and the order in which these reductions are made. It might be worthwhile to note that there are places in this process where we got lucky: as fate would have it, the coefficient $E_2$ of $S_s$ is zero (but only for $k=m+s$), and this allows us to find a recurrence of order two instead of three! This procedure has now reduced our total computation time from 30 hours (using the strategy of the previous section) to 1.4 hours. \begin{figure}[ht] \begin{center} \setlength{\tabcolsep}{0.1cm} \renewcommand{\arraystretch}{1.2} \resizebox{\textwidth}{!}{% \begin{tabular}{|l|l|l|l|l|l|l|} \hline \rule{0pt}{10pt}Sum& Method & Object & Time (s) & Rank & Shape & Bytes\\[2pt] \hline \multirow{8}{*}{$G^{(1)}_s$}& \multirow{2}{*}{CT} & \multirow{2}{*}{I1}&\multirow{2}{*}{fast}&\multirow{2}{*}{2}&$\lbrace S_k,S_s,1\rbrace,\lbrace S_s^2,S_s,1\rbrace$ & \multirow{2}{*}{3720}\\ & & & & & $(1,1,0,1),(0,1,0,1)$ & \\\cline{2-7} & \multirow{2}{*}{CT} & \multirow{2}{*}{P1}&\multirow{2}{*}{fast}&\multirow{2}{*}{1}&$\lbrace S_s,1\rbrace$ & \multirow{2}{*}{1368}\\ & & & & & $(1,1,0,0)$ & \\\cline{2-7} & \multirow{2}{*}{3.3--3.5} & \multirow{2}{*}{R1}&\multirow{2}{*}{fast}&\multirow{2}{*}{3}&$\lbrace S_s^3,S_s^2,S_s,1\rbrace$ & \multirow{2}{*}{339736}\\ & & & & & $(6,3,6,6)$ & \\\cline{2-7} & \multirow{2}{*}{R1$**$P1} & \multirow{2}{*}{ann1}&\multirow{2}{*}{fast}&\multirow{2}{*}{4}&$\lbrace S_s^4,S_s^3,S_s^2,S_s,1\rbrace$ & \multirow{2}{*}{645528}\\ & & & & & $(7,4,6,6)$ & \\ \hline \multirow{12}{*}{$G^{(2)}_s$}&\multirow{2}{*}{CT}& \multirow{2}{*}{I2}&\multirow{2}{*}{7}&\multirow{2}{*}{3}&$\lbrace S_s^2,S_k,S_s,1\rbrace,\lbrace S_k,S_s,S_k,1\rbrace,\lbrace S_k^2,S_k,S_s,1\rbrace$ & \multirow{2}{*}{15720}\\ & & & & & $(1,1,1,3),(1,0,0,1),(2,1,1,2)$ & \\\cline{2-7} & \multirow{2}{*}{CT} & \multirow{2}{*}{P2}&\multirow{2}{*}{70}&\multirow{2}{*}{3}&$\lbrace S_s^3,S_s^2,S_s,1\rbrace$ & \multirow{2}{*}{5120}\\ & & & & & $(1,1,0,0)$ & \\\cline{2-7} & \multirow{2}{*}{3.3--3.5}& \multirow{2}{*}{R2act}&\multirow{2}{*}{330}&\multirow{2}{*}{3}&$\lbrace S_s^2,S_k,S_s,1\rbrace,\lbrace S_k,S_s,S_k,1\rbrace,\lbrace S_k^2,S_k,S_s,1\rbrace$ & \multirow{2}{*}{50918792}\\ & & & & & $(10,5,5,11,12),(10,4,4,9,10),(11,5,5,11,12)$ & \\\cline{2-7} & \multirow{2}{*}{\ref{sec:inhom}} & \multirow{2}{*}{R2 (slow sub)}&\multirow{2}{*}{107870}&\multirow{2}{*}{2}&$\lbrace S_s^2,S_s,1\rbrace$ & \multirow{2}{*}{953768}\\ & & & & & $(8,4,9,9)$ & \\\cline{2-7} & \multirow{2}{*}{\ref{sec:sub}} & \multirow{2}{*}{R2 (fast sub)}&\multirow{2}{*}{4200}&\multirow{2}{*}{2}&$\lbrace S_s^2,S_s,1\rbrace$ & \multirow{2}{*}{953768}\\ & & & & & $(8,4,9,9)$ & \\\cline{2-7} & \multirow{2}{*}{R2$**$P2}& \multirow{2}{*}{ann2}&\multirow{2}{*}{fast}&\multirow{2}{*}{5}&$\lbrace S_s^5,S_s^4,S_s^3,S_s^2,S_s,1\rbrace$ &
\multirow{2}{*}{3931560}\\ & & & & & $(10,5,10,10)$ & \\ \hline \multirow{2}{*}{$G_s(x)$} & \multirow{2}{*}{ann1+ann2} & \multirow{2}{*}{ann} & \multirow{2}{*}{16} & \multirow{2}{*}{5} & $\lbrace S_s^5,S_s^4,S_s^3,S_s^2,S_s,1\rbrace$ & \multirow{2}{*}{3931848}\\ & & & & & $(10,5,10,10)$ & \\ \hline \end{tabular}} \end{center} \caption{Some intermediary results for the split sum case. ``CT'' refers to the direct application of creative telescoping prior to any modifications, ``I'' refers to the annihilator of all of the inner sums, ``P'' refers to the telescoper resulting from CT applied to I, ``R2act'' refers to the result of the human-constructed operator \eqref{eq:G2R2act} for the inhomogeneous parts acting on I2, ``R'' refers to the collective annihilator of the inhomogeneous parts (achieved by making an appropriate substitution into R2act), ``R$**$P'' indicates non-commutative multiplication to obtain the annihilator of the whole sum, ``ann'' is obtained by applying the closure property of addition, by computing \textsc{DFinitePlus}[ann1,ann2], ``fast'' indicates computations of $<1$ second, ``Rank'' refers to the holonomic rank of the object, ``Shape'' provides information about the support (top) of the generators for the annihilating ideal and their coefficient degrees (bottom) in the variable order $(x,b,m,s)$ (and order $(x,b,m,s,k)$ for R2act).} \label{fig:intermediaryresults} \end{figure} \subsection{Gamma Insertions}\label{sec:gamma} In this section, we focus on dealing with the triple sum \eqref{eq:triplesum}. We remark that the naive approach (applying creative telescoping directly three times, sequentially or in one parallel step, but without any of the required adjustments) produces an incorrect first-order recurrence for $G_s(x)$, namely \[(1-bx) S_s + (x-1)\] (plugging in a few values will confirm its incorrectness). However, this computation was quick. Unfortunately, the certificates \[\frac{b k - i k + b i k - b r + i r - b i r}{b^2 (i - r) (1 + r)},\ \frac{r(bx-1)}{r-(s+1)},\ \frac{i k - i r + b^2 i x - b i k x + b^2 i r x}{ b^2 (i - r) (1 + r)},\] have a singularity at $r=s+1$. Moreover, since the telescoper contains a shift in $s$, the commutation problem still persists. Thus, the nice-looking first-order recurrence is somewhat deceptive. This next strategy differs from the previous sections in that it takes a more holistic approach, treating \eqref{eq:triplesum} all at once and making adjustments to the single summand in order to deal with the ``unnatural boundary'' problems simultaneously, through the introduction of a new parameter. This idea dates back to the thesis of Wegschaider \cite[2, Section 2.7.3]{Wegschaider97}, with the caveat that an extra parameter would increase computation time. For our problem, however, we did not observe a significant increase in this regard. The issue of ``unnatural boundaries'' occurs whenever all three binomial coefficients in our summand are nonzero beyond the limits of our summation. Assuming that $m$ and $s$ are fixed positive integers, we let $B_1$ denote the collection of points $(k,i,r)\in\mathbb{Z}^3$ for which the summand in~\eqref{eq:triplesum} is nonzero, and let $B_2$ denote all points $(k,i,r)\in\mathbb{Z}^3$ that are inside the summation ranges. While $B_2$ corresponds to all integer points of a bounded polytope in $\mathbb{R}^3$, the set $B_1$ is unbounded (see Fig.~\ref{fig:unbounded}).
We essentially want to sum over all points $(k,i,r)$ in the intersection of $B_1$ and $B_2$ (depicted in blue), while we want to avoid those points in $B_1\setminus B_2$ (depicted in red). Hence, the set of ``bad points'' also forms an infinite polytope, and we remove these points using gamma functions. \begin{figure}[ht] \centering \includegraphics[scale=0.4]{unbounded.png} \caption{``Bad'' points $B_1\setminus B_2$ (red) and ``good'' points $B_1\cap B_2$ (blue) for fixed $m=15$ and $s=10$.} \label{fig:unbounded} \end{figure} We first recall that $\Gamma(k)$ has poles exactly at the non-positive integers, and therefore $\frac{1}{\Gamma(k)}$ has zeros at $k=0,-1,-2,\ldots$. Then the summand can be modified by the following gamma functions in order to enforce natural boundaries: \[ C(\varepsilon,i,k,r):=\binom{s}{r}\cdot\binom{k-1}{r-1}\cdot\binom{r-1}{i}\cdot \frac{\Gamma(k+\varepsilon)}{\Gamma(k)}\cdot \frac{\Gamma(r-i-(k-m)+\varepsilon)}{\Gamma(r-i-(k-m))} \] with some new symbol~$\varepsilon$. Upon sending $\varepsilon\to0$ we get back the original product of binomial coefficients, while the two additional factors force the expression $C(\varepsilon,i,k,r)$ to be zero whenever $k\leq0$ or $r-i-(k-m)\leq0$. In other words, the introduction of the reciprocal gammas is balanced out by the perturbed gammas in the numerators, which conveniently avoids division by zero and gives an equivalent (final) result for the original problem after setting $\varepsilon=0$. We comment that the gamma function with the perturbation is a hypergeometric (and therefore holonomic) term that takes on a finite value away from the pole. We have now achieved a summand such that creative telescoping can be applied without having to worry about undesirable terms from beyond the summation boundaries. Another consequence of this is that the telescoper can be pulled out of the summation. Unfortunately, this procedure does not completely remove the threat of singularities that may show up in the certificates, so inhomogeneous parts can surface here and must be treated (this turned out to be the case in our situation). Except for that, the net effect of introducing the gammas is that we can apply the creative telescoping algorithm (three times) to the triple sum \[ \sum_{k=1}^{m+s-1}\sum_{r=1}^s\sum_{i=0}^{r-1-(k-m)}C(\varepsilon,i,k,r)\cdot \frac{b-1}{(-b)^{r-i}}(bx)^k \] and afterwards take the limit $\varepsilon\to0$ to obtain the desired recurrence for $G_s(x)$. Unfortunately, we do not get the minimal-order recurrence, but a fourth-order one. This has allowed us to reduce our computation time to about 11 minutes. However, this is not yet the end of the story. In Fig.~\ref{fig:unbounded} we were trapped by Mathematica's definition of the binomial coefficient (cf. the discussion in Section~\ref{sec:background}): actually, the summand is zero for $k\leq0$ if we employ the intended ``correct'' definition, which implies that $\binom{n}{k}$ is zero unless $0\leq k\leq n$.
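Under this ``correct'' convention, the triple sum is a finite sum of rational numbers for every fixed $m$ and $s$, so it can also be evaluated directly. The following sketch is our own Python illustration in exact rational arithmetic (Python's \texttt{comb} already implements the convention that $\binom{n}{k}=0$ unless $0\leq k\leq n$); on a small grid with $b>1$ and $x\in[0,1)$ it reproduces the non-positivity of \eqref{eq:triplesum} that underlies the original lemma and that can be read off from the closed form in Section~\ref{sec:gf}.

\begin{verbatim}
from fractions import Fraction
from math import comb

def triple_sum(m, s, b, x):
    # direct evaluation of (eq:triplesum); comb follows the convention
    # binomial(n, k) = 0 unless 0 <= k <= n
    total = Fraction(0)
    for k in range(1, m + s):                  # k = 1 .. m+s-1
        for r in range(1, s + 1):              # r = 1 .. s
            for i in range(0, r - (k - m)):    # i = 0 .. r-1-(k-m)
                total += (comb(s, r)*comb(k - 1, r - 1)*comb(r - 1, i)
                          * (b - 1)/(-b)**(r - i) * (b*x)**k)
    return total

b, x = Fraction(5, 2), Fraction(1, 3)          # b > 1, 0 <= x < 1
for m in range(1, 6):
    for s in range(1, 6):
        assert triple_sum(m, s, b, x) <= 0
print("triple sum is non-positive on the sampled grid")
\end{verbatim}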
The conditions for the summand to be nonzero (implied by the three binomial coefficients) somehow correspond to the summation bounds (given by the three summation quantifiers), which is illustrated in the following table (it is actually a curiosity of this problem that we can find such a correspondence): \medskip \begin{center} \renewcommand{\arraystretch}{1.2} \setlength{\tabcolsep}{0.2cm} \begin{tabular}{|c|c|c|} \hline \multirow{2}{1.6cm}{Factor in Summand} & Nonzero Range & Summation Bounds\\ & ($B_1$) & ($B_2$) \\ \hline \rule[-7pt]{0pt}{19pt}$\binom{s}{r}$ & $0\leq r\leq s$ & $1\leq r \leq s$\\ \hline \rule[-7pt]{0pt}{19pt}$\binom{k-1}{r-1}$ & $0\leq r-1\leq k-1$ & $1\leq k \leq m+s-1$\\ \hline \rule[-7pt]{0pt}{19pt}$\binom{r-1}{i}$ & $0\leq i\leq r-1$ & $0\leq i \leq r-1-(k-m)$\\ \hline \end{tabular} \end{center} \medskip After close inspection of this table, it becomes evident that only one gamma correction is actually needed (and hence Fig.~\ref{fig:unbounded} does not show the true situation). We can therefore redefine \[ C(\varepsilon,i,k,r):=\binom{s}{r}\cdot\binom{k-1}{r-1}\cdot\binom{r-1}{i}\cdot \frac{\Gamma(r-i-(k-m)+\varepsilon)}{\Gamma(r-i-(k-m))}. \] This observation speeds up the computations significantly, and the winning time is 30 seconds. Moreover, we obtain the minimal recurrence of order three. The reader may now wonder how we can tell the HolonomicFunctions package that this computation should be executed with a different definition of the binomial coefficient (one that differs from Mathematica's). The answer is: we do not have to, since it is completely irrelevant (from the viewpoint of the package), because both versions of the binomial coefficient satisfy the very same recurrence equations, as we have seen in Section~\ref{sec:background}! The difference only becomes relevant when we evaluate the summand at particular values (which is done outside of the package), e.g., when checking initial conditions. \begin{figure} \begin{center} \setlength{\tabcolsep}{0.2cm} \begin{tabular}{|c|l|c|} \hline \rule{0pt}{10pt}Approximate & \multicolumn{1}{|c|}{Strategies} & \multirow{2}{*}{Result}\\ Comp. Time & \multicolumn{1}{|c|}{Implemented} & \\[2pt] \hline \multirow{4}{*}{30 hours} & & \multirow{9}{4cm}{fifth-order recurrence in the ideal generated by the guessed recurrence} \\[-10pt] & -- split sums\rule{0pt}{10pt} $G^{(1)}_s$ and $G^{(2)}_s$ & \\ & -- sing./comm. corrections & \\ & -- closure properties & \\[2pt] \cline{1-2} \multirow{5}{*}{1.4 hours} &&\\[-10pt] & -- split sums\rule{0pt}{10pt} $G^{(1)}_s$ and $G^{(2)}_s$ & \\ & -- sing./comm. corrections & \\ & -- closure properties & \\ & -- substitution speedup & \\[2pt] \hline \multirow{3}{*}{11 minutes} & -- the triple sum\rule{0pt}{10pt} \eqref{eq:triplesum} & \multirow{3}{4cm}{fourth-order recurrence in the ideal generated by the guessed recurrence} \\ & -- two gamma insertions & \\ & -- sing. corrections & \\[2pt] \hline \multirow{3}{*}{30 seconds} & -- the triple sum\rule{0pt}{10pt} \eqref{eq:triplesum} & \multirow{3}{4cm}{same third-order recurrence as the guessed recurrence} \\ & -- one gamma insertion & \\ & -- sing. corrections & \\[2pt] \hline \multirow{2}{*}{$<$ 1 second} & -- the triple sum \eqref{eq:triplesum} & \multirow{2}{4cm}{a generating function with $G_s(x)$ coefficients}\\ & -- residues \cite{BostanLairezSalvy17} & \\[2pt] \hline \end{tabular} \end{center} \caption{Results and Comparisons}\label{fig:results} \end{figure} \subsection{Generating Functions of Binomial Sums}\label{sec:gf} In this final section, we would like to highlight another symbolic computation approach for dealing with binomial sums. It has a different flavor from creative telescoping but nevertheless allows us to solve our problem in a most elegant way. In 2015, Bostan, Lairez, and Salvy studied the representation of the generating functions of binomial sums via integrals of rational functions \cite{BostanLairezSalvy17}. In their paper, they acknowledged the critical role that the certificate plays in the creative telescoping method and some of the computational problems that it creates. Thus, they sought to use complex analysis to provide an alternate way to view binomial sums without the need to deal with certificates. However, one would still need to take care of unnatural boundaries before applying their method. Fortunately, their implementation allows us to do this by accepting Heaviside functions in the input, which is analogous to our usage of gamma quotients with $\varepsilon$ in Section \ref{sec:gamma}. This means that by representing our triple sum as \[ \sum_{k=1}^{\infty}\sum_{r=1}^{s}\sum_{i=0}^{\infty}\binom{s}{r}\binom{k-1}{r-1}\binom{r-1}{i}\frac{b-1}{(-b)^{r-i}}(bx)^k\cdot H(r-1-(k-m)-i), \] we can apply their package, and it returns the rational function \[ \frac{xyz(b-1)}{(z-1)(y-1)(1-(1-x)z-yxb)} \] for which the generating function \[ \sum\limits_{m=0}^{\infty}\sum\limits_{s=0}^{\infty} G_s(x)y^mz^s \] is its residue, with $z$ indexing the parameter~$s$ and $y$ indexing the parameter~$m$. In this way, the non-positivity of the coefficients (our triple sum) can be read off directly under the original conditions of $b>1$ and $x\in[0,1)$. Their method also has some limitations (the input class of admissible expressions), which we will not detail here, but it is clear that on the problem at hand it works extremely well: the whole computation to derive the above rational function took less than a second! \section{Conclusions and Future Work} In this expository article, we demonstrated the usage of the HolonomicFunctions package to deal with an intricate triple sum coming from an application in quasi-Monte Carlo integration. We had a couple of objectives in mind for this paper: first, we felt the need to deliver some technical details for a key lemma in~\cite{WiartWong20}, where only the main ideas of the computer algebra proof were mentioned; second, we wanted to provide a somewhat easy-to-digest description for proving special function and combinatorial identities with the help of the computer by expounding on the difficulties that may arise in similar applications and highlighting a few creative ways to cure them. We hope that we have convinced the reader that it is not always so cut-and-dried to prove a given identity with the holonomic systems approach. While in principle it allows one to prove holonomic identities in an automated way, we have seen that in practice, even with current state-of-the-art software tools, many steps in the proof require human interaction.
At many positions in the proving process, we had to make a choice on how to proceed, and the decision may influence both the optimality of the final result and the time that is required to obtain it. Fig.~\ref{fig:results} gives an impressive overview of how much difference in runtime such choices can make. It is also clear that bits and pieces of some of these strategies appear in the existing literature. Some were already mentioned throughout the text, and we summarize them here. For example, the idea of splitting sums to simplify computation problems was discussed by Prodinger \cite{Prodinger96}, Wegschaider briefly proposed the addition of a parameter for singularity removal in his thesis \cite{Wegschaider97} (another application of this can be found in \cite{LyonsPauleRiese02}), some of the details of treating boundary terms in multisums appeared in \cite{AndrewsPauleSchneider05} and \cite{ChyzakMahboubiSibutPinoteTassi14}, and of course, we directly applied the generating function method from \cite{BostanLairezSalvy17}. Therefore, it is pertinent to remark that the contribution of this paper is to present a new context in which to deal with these kinds of sums and to compile various strategies into a user-friendly format. It might also be worthwhile to note that despite these ``known'' issues dating back to the 1990s, not much progress has been made on automating them (and for good reason). This paper further serves the purpose of underlining that much work can still be done on this front! With all that being said, a future plan is to automate some of the proof steps that had to be done ``by hand'' in this case study, for example, the analysis of singularities in the certificate(s) and the treatment of the commutation issue. \subsection*{Acknowledgements} We would like to thank the organizers of CASC 2020 for providing an occasion and opportunity to give a talk about the work on which this article is based. We were encouraged by the positive feedback from the audience, which motivated this post-proceedings contribution. We would especially like to acknowledge Pierre Lairez for pointing us to his paper \cite{BostanLairezSalvy17} and for demonstrating how to make the computation in Section~\ref{sec:gf} with his binomial sums Maple package. We thank the reviewers for their careful reading, which helped us improve this manuscript greatly, particularly the second reviewer, who pointed out some related literature and provided insightful criticism. We also express our appreciation to Hao Du and Ali Uncu for their support and helpful commentary.
\section{Appendix} \input{proofs} \subsubsection*{Acknowledgments} \vspace{-.5em} Lizhen Lin would like to thank Dong Quan Nguyen for very helpful discussions. Lizhen Lin acknowledges the support from NSF grants IIS 1663870, DMS Career 1654579 and a DARPA grant N66001-17-1-4041. Bayan Saparbayeva was partially supported by DARPA N66001-17-1-4041. \clearpage \bibliographystyle{apalike} \section{Conclusion and Discussion}\label{sec:conclusion} We propose a general scheme for solving non-convex optimization problems on manifolds that yields theoretical guarantees of convergence to a stationary point when the objective function is non-convex. When the objective function is convex, it leads to accelerated convergence rates for a large class of first-order methods, as we show in our numerical examples. One of the interesting future directions we want to pursue is proposing accelerated algorithms on statistical manifolds (manifolds of densities or distributions) by employing information-geometric techniques, and applying the algorithms to accelerate the convergence and mixing of MCMC algorithms. \section{Introduction} \label{sec-intro} Optimization is a near-ubiquitous tool used in a wide range of disciplines including the physical sciences, applied mathematics, engineering and the social sciences. Formally, it aims to maximize or minimize some quantitative criterion, namely the objective function, with respect to some parameters of interest. In many complex learning problems in modern data science, the parameters of interest are naturally defined on a \emph{manifold}. The emerging field of statistics on manifolds based on Fr\'echet means \citep{rabibook, linclt} can be viewed as one of the notable examples of optimization on general manifolds. Another example can be found in building scalable recommender systems, where extracting a low-rank matrix involves an optimization problem over a Grassmann manifold \citep{boumal}. Recent developments in geometric deep learning, where the input or output layer is constrained to lie on a Riemannian manifold \citep{Lohit2017LearningIR, Huang2017ARN, Huang2017DeepLO}, constitute another important class of applications. Other applications arise in diverse areas including medical imaging analysis, Procrustes shape matching, dimension reduction, dynamic subspace tracking, and problems involving ranking and orthogonality constraints--among many others. This proliferation of manifold-valued applications demands fundamental development of models, algorithms and theory for solving optimization problems over non-Euclidean spaces. The current literature on optimization over manifolds mainly focuses on extending existing Euclidean space algorithms, such as Newton's method \citep{2014arXiv1407.5965S, ring12}, conjugate gradient descent \citep{Edelman1998, Nishimori2008}, steepest descent \citep{steepest}, trust-region methods \citep{Absil2007, NIPS2011_4402} and others. Many of the objective functions in manifold optimization problems are very complex. One of the key challenges for solving such problems lies in the difficulty of verifying the convexity and the degree of convexity of the objective function. Current approaches cannot adapt to the complexity of the problem at hand in manifold spaces. We take a major step to address these issues by proposing a general scheme to solve convex and non-convex optimization problems on manifolds using gradient-based algorithms originally designed for convex functions.
The key idea is to ``convexify'' the objective function by adding a multiple of the squared retraction distance. The proposed algorithm does not require knowledge of whether the objective function is convex, but automatically converges to an optimum if the function is strongly convex. When the objective is non-convex, it achieves rapid convergence to a stationary point. The proposed algorithm is a generalization of Nesterov acceleration \citep{Nesterov2004}, which improves the convergence rate of gradient descent algorithms. Our algorithm (which we call $\mathcal{A}_2$) takes any general existing optimization method (which we call $\mathcal A$), originally designed for convex functions, and converts it into a method applicable to non-convex functions. Similar schemes have been explored for optimization problems in Euclidean space \citep{pmlr-v84-paquette18a}. Generalizations to arbitrary manifolds, however, require fundamentally novel theoretical development. In the Euclidean case, gradient steps move along straight lines, whereas on a manifold they move along retraction curves, which crucially affects the analysis and makes convergence harder to prove. Also, on manifolds it is not trivial to correctly convexify a weakly convex function; weakly convex functions form the broad class of non-convex functions we consider, and they account for most of the interesting examples of non-convex objectives in machine learning. We propose a novel idea to convexify the objective locally with the help of the retraction. Key features of our algorithm include adaptation to the unknown weak convexity of the objective function and automatic Nesterov acceleration. The proposed algorithm can be used to accelerate a broad class of $\mathcal A$ algorithms including gradient descent as well as parallel optimization approaches \citep[see][]{lizhennips2018}. Our paper is organized as follows: In Section~\ref{sec:related}, we introduce related work on accelerated optimization algorithms. Next, we present our proposed acceleration algorithm on manifolds in Section~\ref{sec: proposed} and present theoretical convergence results. In Section~\ref{sec-simu}, we consider a simulation study of estimating Fr\'echet means and a real data example using the Netflix prize data set in a matrix completion problem. \subsection{Proof of Theorem 2} \label{suppl-proofs} We first introduce a simple lemma. \begin{lemma} Suppose the sequence $\{\alpha_k\}_{k\geq1}$ is produced by $\mathcal{A}_2.$ Then, the following bounds hold for all $k\geq1$ \begin{equation*} \frac{\sqrt{2}}{k+2}\leq\alpha_k\leq\frac{2}{k+1}. \end{equation*} \end{lemma} \begin{proof}[Proof of Theorem 2] The descent condition in \begin{align}\label{C1} \begin{split} {\rm dist}\big(0_{\bar{\theta}_k}, \partial h(\bar{\theta}_k, \theta_{k-1})\big)&<\kappa_k d_{\mathcal{R}}(\bar{\theta}_k, \theta_{k-1}) \quad \text{and} \\ \quad h_{\kappa_k}(\bar{\theta}_k, \theta_{k-1})&\leq h_{\kappa_k}(\theta_{k-1}, \theta_{k-1}), \end{split} \end{align} implies that the sequence $\{f(\theta_k)\}_{k\geq0}$ is monotonically decreasing. From this we obtain \begin{equation}\label{T-1} \begin{split} f(\theta_{k-1})&=h_{\kappa}(\theta_{k-1}, \theta_{k-1})\\ &\geq h_{\kappa}(\bar{\theta}_k, \theta_{k-1})\\ &\geq f(\theta_k)+\frac{\kappa}{2}d_{\mathcal{R}}^2(\bar{\theta}_k, \theta_{k-1}).
\end{split} \end{equation} Using condition \eqref{C1}, we apply Lemma 2 with $\vartheta=\theta_{k-1}, \theta=\bar{\theta}_k$ and $\varepsilon=\kappa Kd_{\mathcal{R}}(\bar{\theta}_k, \theta_{k-1});$ hence \begin{equation*} {\rm dist}(0_{\bar{\theta}_k}, \partial f(\bar{\theta}_k))\leq 2\kappa K d_{\mathcal{R}}(\bar{\theta}_k, \theta_{k-1}). \end{equation*} Combining the above inequality with \eqref{T-1}, one has \begin{align}\label{T-2} \begin{split} {\rm dist}^2(0_{\bar{\theta}_k}, \partial f(\bar{\theta}_k))&\leq4\kappa^2K^2d_{\mathcal{R}}^2(\bar{\theta}_k, \theta_{k-1})\\ &\leq8\kappa_{\max}K^2\big(f(\theta_{k-1})-f(\theta_k)\big). \end{split} \end{align} Summing over $j=1$ to $N,$ we can conclude \begin{align*} \min_{j=1, ..., N}\Big\{{\rm dist}^2\big(0_{\bar{\theta}_j}, \partial f(\bar{\theta}_j)\big)\Big\}&\leq\frac{8\kappa_{\max}K^2}{N}\sum^N_{j=1}\big(f(\theta_{j-1})-f(\theta_j)\big)\\ &\leq\frac{8\kappa_{\max}K^2}{N}\big(f(\theta_0)-f^{*}\big). \end{align*} Fix a $v_k\in\partial h_{\kappa}(\Tilde{\theta}_k, \vartheta_k).$ Since the function $f$ is $\kappa_{cvx}(K_1^4K_2^4-R_1)$-strongly convex, the function $h_{\kappa_{cvx}}$ is $\kappa_{cvx} K_1^4K_2^4$-strongly convex. \begin{align*} f(\theta)&+\frac{\kappa_{cvx}}{2}d_{\mathcal{R}}^2(\theta, \vartheta_k)\\ & \geq f(\Tilde{\theta}_k)+\frac{\kappa_{cvx}}{2}d_{\mathcal{R}}^2(\Tilde{\theta}_k, \vartheta_k)\\ & \quad\quad+\frac{\kappa_{cvx} K_1^4K_2^4}{2}d_{\mathcal{R}}^2(\Tilde{\theta}_k, \theta)+\langle v_k, \mathcal{R}_{\Tilde{\theta}_k}^{-1}\theta\rangle. \end{align*} Then \begin{align*} f(\Tilde{\theta}_k) &\leq f(\theta)+\frac{\kappa_{cvx}}{2}\big(d_{\mathcal{R}}^2(\theta, \vartheta_k)-K_1^4K_2^4 d_{\mathcal{R}}^2(\Tilde{\theta}_k, \theta)\\ &\quad\quad -d_{\mathcal{R}}^2(\Tilde{\theta}_k, \vartheta_k)\big)-\langle v_k, \mathcal{R}_{\Tilde{\theta}_k}^{-1}\theta\rangle. \end{align*} So for any $\theta\in\mathcal{M}$ \begin{equation*} \begin{split} f(\theta_k)\leq& f(\Tilde{\theta}_k)\\ \leq& f(\theta)+\frac{\kappa_{cvx}}{2}\Big(K_1^2\|\mathcal{R}^{-1}_{\theta_{k-1}}\theta-\mathcal{R}^{-1}_{\theta_{k-1}}\vartheta_{k}\|^2- K_1^4K_2^2\|\mathcal{R}^{-1}_{\theta_{k-1}}\Tilde{\theta}_k\\ &-\mathcal{R}^{-1}_{\theta_{k-1}}\theta\|^2 \Big)-\frac{\kappa_{cvx}}{2}d_{\mathcal{R}}^2(\Tilde{\theta}_k, \vartheta_k)-\langle v_k, \mathcal{R}_{\Tilde{\theta}_k}^{-1}\theta\rangle. \end{split} \end{equation*} We substitute $\theta=\mathcal{R}_{\theta_{k-1}}\alpha_k\mathcal{R}^{-1}_{\theta_{k-1}}\theta^{*},$ where $\theta^{*}$ is any minimizer of $f.$ Using convexity of $f$ \begin{equation*} f(\theta)\leq\alpha_k f(\theta^{*})+(1-\alpha_k)f(\theta_k), \end{equation*} the stopping criterion \begin{equation} \label{C2} {\rm dist}\big(0_{\Tilde{\theta}_k}, \partial h_{\kappa_{cvx}}(\Tilde{\theta}_k, \vartheta_k)\big)<\frac{\kappa_{cvx}}{k+1}d_{\mathcal{R}}(\Tilde{\theta}_k, \vartheta_k), \end{equation} i.e.
$\|v_k\|<\frac{\kappa_{cvx}}{k+1}d_{\mathcal{R}}(\Tilde{\theta}_k, \vartheta_k),$ and $\vartheta_k=\mathcal{R}_{\theta_{k-1}}\alpha_{k}\mathcal{R}^{-1}_{\theta_{k-1}}\Tilde{\vartheta}_{k-1},$ and $\Tilde{\vartheta}_k=\mathcal{R}_{\theta_{k-1}}\frac{1}{\alpha_k}\mathcal{R}^{-1}_{\theta_{k-1}}\Tilde{\theta}_k,$ one has \begin{equation*} \begin{split} f(\theta_k)\leq&\alpha_k f(\theta^{*})+(1-\alpha_k)f(\theta_k)\\ &+\frac{\kappa_{cvx}\alpha_k^2}{2}\Big(K_1^2\|\mathcal{R}^{-1}_{\theta_{k-1}}\theta^*-\mathcal{R}^{-1}_{\theta_{k-1}}\Tilde{\vartheta}_{k-1}\|^2\\ &-K_1^4K_2^2\|\mathcal{R}^{-1}_{\theta_{k-1}}\Tilde{\vartheta}_k-\mathcal{R}^{-1}_{\theta_{k-1}}\theta^*\|^2\Big)\\ &-\frac{\kappa_{cvx}}{2}d_{\mathcal{R}}^2(\Tilde{\theta}_k, \vartheta_k)+\frac{\kappa_{cvx}}{k+1}d_{\mathcal{R}}(\Tilde{\theta}_k, \vartheta_k) \|\mathcal{R}_{\tilde{\theta}_k}^{-1}\theta\|\\ \leq&\alpha_k f(\theta^{*})+(1-\alpha_k)f(\theta_k)\\ &+\frac{\kappa_{cvx}\alpha_k^2}{2}\Big(K_1^2K_2^2d_{\mathcal{R}}^2(\theta^*, \Tilde{\vartheta}_{k-1})-K_1^2K_2^2d_{\mathcal{R}}^2(\theta^{*}, \Tilde{\vartheta}_k)\Big)\\ &-\frac{\kappa_{cvx}}{2}d_{\mathcal{R}}^2(\Tilde{\theta}_k, \vartheta_k)+\frac{\kappa_{cvx}}{k+1}d_{\mathcal{R}}(\Tilde{\theta}_k, \vartheta_k)d_{\mathcal{R}}(\tilde{\theta}_k, \theta)\\ \leq&\alpha_k f(\theta^{*})+(1-\alpha_k)f(\theta_k)\\ &+\frac{\kappa_{cvx}\alpha_k^2}{2}\Big(K_1^2K_2^2d_{\mathcal{R}}^2(\theta^*, \Tilde{\vartheta}_{k-1})-K_1^2K_2^2d_{\mathcal{R}}^2(\theta^{*}, \Tilde{\vartheta}_k)\Big)\\ &-\frac{\kappa_{cvx}}{2}d_{\mathcal{R}}^2(\Tilde{\theta}_k, \vartheta_k)\\ &+\frac{\kappa_{cvx}K_1}{k+1}d_{\mathcal{R}}(\Tilde{\theta}_k, \vartheta_k)\|\mathcal{R}_{\theta_{k-1}}^{-1}\tilde{\theta}_k-\mathcal{R}_{\theta_{k-1}}^{-1}\theta\|\\ =&\alpha_k f(\theta^{*})+(1-\alpha_k)f(\theta_k)\\ &+\frac{\kappa_{cvx}\alpha_k^2}{2}\Big(K_1^2K_2^2d_{\mathcal{R}}^2(\theta^*, \Tilde{\vartheta}_{k-1})-K_1^2K_2^2d_{\mathcal{R}}^2(\theta^{*}, \Tilde{\vartheta}_k)\Big)\\ &-\frac{\kappa_{cvx}}{2}d_{\mathcal{R}}^2(\Tilde{\theta}_k, \vartheta_k)\\ &+\frac{\kappa_{cvx}\alpha_kK_1}{k+1}d_{\mathcal{R}}(\Tilde{\theta}_k, \vartheta_k)\|\mathcal{R}_{\theta_{k-1}}^{-1}\tilde{\vartheta}_k-\mathcal{R}_{\theta_{k-1}}^{-1}\theta^{*}\|\\ \leq&\alpha_k f(\theta^{*})+(1-\alpha_k)f(\theta_k)\\ &+\frac{\kappa_{cvx}\alpha_k^2}{2}\Big(K_1^2K_2^2d_{\mathcal{R}}^2(\theta^*, \Tilde{\vartheta}_{k-1})-K_1^2K_2^2d_{\mathcal{R}}^2(\theta^{*}, \Tilde{\vartheta}_k)\Big)\\ &-\frac{\kappa_{cvx}}{2}d_{\mathcal{R}}^2(\Tilde{\theta}_k, \vartheta_k)\\ &+\frac{\kappa_{cvx}\alpha_kK_1K_2}{k+1}d_{\mathcal{R}}(\Tilde{\theta}_k, \vartheta_k)d_{\mathcal{R}}(\tilde{\vartheta}_k, \theta^{*}). \end{split} \end{equation*} So \begin{equation}\label{T-3} \begin{split} f(\theta_k)\leq&\alpha_k f(\theta^{*})+(1-\alpha_k)f(\theta_k)\\ &+\frac{\kappa_{cvx}\alpha_k^2}{2}\Big(K_1^2K_2^2d_{\mathcal{R}}^2(\theta^*, \Tilde{\vartheta}_{k-1})-K_1^2K_2^2d_{\mathcal{R}}^2(\theta^{*}, \Tilde{\vartheta}_k)\Big)\\ &-\frac{\kappa_{cvx}}{2}d_{\mathcal{R}}^2(\Tilde{\theta}_k, \vartheta_k)\\ &+\frac{\kappa_{cvx}\alpha_kK_1K_2}{k+1}d_{\mathcal{R}}(\Tilde{\theta}_k, \vartheta_k)d_{\mathcal{R}}(\theta^{*}, \Tilde{\vartheta}_k). 
\end{split} \end{equation} Set $\mu_k=\frac{1}{k+1}.$ Completing the square yields \begin{align*} -\frac{\kappa_{cvx}}{2}d_{\mathcal{R}}^2(\Tilde{\theta}_k, \vartheta_k)+\kappa_{cvx}\alpha_k\mu_kK_1K_2 d_{\mathcal{R}}(\Tilde{\theta}_k, \vartheta_k)d_{\mathcal{R}}(\theta^{*}, \Tilde{\vartheta}_k)\\ \leq \frac{K_1^2K_2^2\kappa_{cvx}\alpha_k^2\mu_k^2}{2}d_{\mathcal{R}}^2(\theta^{*}, \Tilde{\vartheta}_k), \end{align*} and subtracting $f^{*}=f(\theta^{*})$ from both sides, we obtain \begin{equation*} \begin{split} f(\theta_k)-f^{*}&\leq(1-\alpha_k)(f(\theta_{k-1})-f^{*})+\frac{\kappa_{cvx}\alpha_k^2}{2}\Big(K_1^2K_2^2d_{\mathcal{R}}^2(\theta^*, \Tilde{\vartheta}_{k-1})\\ &\quad-K_1^2K_2^2d_{\mathcal{R}}^2(\theta^{*}, \Tilde{\vartheta}_k)\Big)+\frac{K_1^2K_2^2\kappa_{cvx}\alpha_k^2\mu_k^2}{2}d_{\mathcal{R}}^2(\theta^{*}, \Tilde{\vartheta}_k)\\ &=(1-\alpha_k)(f(\theta_{k-1})-f^{*})+\frac{\kappa_{cvx}\alpha_k^2K_1^2K_2^2}{2}d_{\mathcal{R}}^2(\theta^*, \Tilde{\vartheta}_{k-1})\\ &\quad-\frac{\kappa_{cvx}\alpha_k^2K_1^2K_2^2}{2}(1-\mu_k^2)d_{\mathcal{R}}^2(\theta^{*}, \Tilde{\vartheta}_k). \end{split} \end{equation*} So one can obtain \begin{align*} \frac{f(\theta_k)-f^{*}}{\alpha_k^2}+\frac{\kappa_{cvx} K_1^2K_2^2}{2}(1-\mu_k^2)d_{\mathcal{R}}^2(\theta^{*}, \Tilde{\vartheta}_k)\\ \leq\frac{1-\alpha_k}{\alpha_k^2}(f(\theta_{k-1})-f^{*})+\frac{\kappa_{cvx} K_1^2K_2^2}{2}d_{\mathcal{R}}^2(\theta^*, \Tilde{\vartheta}_{k-1}). \end{align*} Denote $A_k=(1-\mu_k^2).$ Using the equality $\frac{1-\alpha_k}{\alpha_k^2}=\frac{1}{\alpha_{k-1}^2}$ we derive the following recursion \begin{equation*} \begin{split} &\frac{f(\theta_k)-f^{*}}{\alpha_k^2}+\frac{\kappa_{cvx} K_1^2K_2^2 A_k}{2}d_{\mathcal{R}}^2(\theta^{*}, \Tilde{\vartheta}_k) \\ &\leq\frac{1-\alpha_k}{\alpha_k^2}(f(\theta_{k-1})-f^{*})+\frac{\kappa_{cvx} K_1^2K_2^2}{2}d_{\mathcal{R}}^2(\theta^*, \Tilde{\vartheta}_{k-1})\\ &=\frac{f(\theta_{k-1})-f^{*}}{\alpha_{k-1}^2}+\frac{\kappa_{cvx} K_1^2K_2^2}{2}d_{\mathcal{R}}^2(\theta^{*}, \Tilde{\vartheta}_{k-1})\\ &\leq\frac{f(\theta_{k-1})-f^{*}}{A_{k-1}\alpha_{k-1}^2}+\frac{\kappa_{cvx} K_1^2K_2^2}{2}d_{\mathcal{R}}^2(\theta^{*}, \Tilde{\vartheta}_{k-1})\\ &=\frac{1}{A_{k-1}}\Bigg(\frac{f(\theta_{k-1})-f^{*}}{\alpha_{k-1}^2}+\frac{\kappa_{cvx} K_1^2K_2^2A_{k-1}}{2}d_{\mathcal{R}}^2(\theta^{*}, \Tilde{\vartheta}_{k-1})\Bigg). \end{split} \end{equation*} The last inequality holds because $0<A_k\leq1.$ Iterating $N$ times, we deduce \begin{align*} \frac{f(\theta_N)-f^{*}}{\alpha_N^2}&\leq\frac{f(\theta_N)-f^{*}}{\alpha_N^2}+\frac{\kappa_{cvx} K_1^2K_2^2 A_k}{2}d_{\mathcal{R}}^2(\theta^{*}, \Tilde{\vartheta}_k)\\ &\leq\frac{\kappa_{cvx} K_1^2K_2^2}{2}d_{\mathcal{R}}^2(\theta^{*}, \theta_0)\prod^N_{k=2}\frac{1}{A_{k-1}}. \end{align*} Note that \begin{equation*} \prod^{N}_{k=2}\frac{1}{A_{k-1}}\leq2; \end{equation*} thereby, with the inequality from Lemma 1, we conclude \begin{align*} f(\theta_N)-f^{*}&\leq\frac{\alpha_N^2\kappa_{cvx} K_1^2K_2^{2}}{2}d_{\mathcal{R}}^2(\theta^{*}, \theta_0)\prod^N_{k=2}\frac{1}{A_{k-1}}\\ &\leq\alpha_N^2\kappa_{cvx} K_1^2K_2^2 d_{\mathcal{R}}^2(\theta^{*}, \theta_0)\\ &\leq\frac{4\kappa_{cvx} K_1^2K_2^2}{(N+1)^2}d_{\mathcal{R}}^2(\theta^{*}, \theta_0). \end{align*} Hence \begin{equation*} f(\theta_N)-f^{*}\leq\frac{4\kappa_{cvx} K_1^2K_2^2}{(N+1)^2}d_{\mathcal{R}}^2(\theta^{*}, \theta_0).
\end{equation*} \end{proof} \subsection{Strong convexity of the objective function in estimating the intrinsic Fr\'echet means on the sphere} We provide a proof that the objective functions in estimating both the intrinsic and extrinsic Fr\'echet means on the sphere in Section 4 are strongly convex. \begin{proof} In order to prove the strong convexity of the Fr\'echet function for the intrinsic mean on the sphere $S^n$, we prove the strong convexity of the squared intrinsic distance function from the point $x_0\in S^n$ \begin{equation*} d_g^2(x_0, x)=\arccos^2(x_0^Tx). \end{equation*} So for the geodesic from the point $x_1\in S^n$ to the point $x_2\in S^n$ \begin{equation*} \begin{split} \gamma(\lambda)&=\exp_{x_1}\lambda\log_{x_1}x_2\\ &=\cos\big(\lambda\arccos(x_1^Tx_2)\big)x_1\\ &\;\;+\sin\big(\lambda\arccos(x_1^Tx_2)\big)\frac{x_2-(x_1^Tx_2)x_1}{\sqrt{1-(x_1^Tx_2)^2}}, \end{split} \end{equation*} we need to show the following inequality \begin{align*} d_g^2(x_0, \gamma(\lambda))&\leq(1-\lambda)d_g^2(x_0, x_1)+\lambda d_g^2(x_0, x_2)\\ &\;\;-\frac{\lambda(1-\lambda)\mu}{2}d_g^2(x_1, x_2). \end{align*} For brevity, let us use the following notation \begin{gather*} d_1=\arccos(x_0^Tx_1), \qquad d_2=\arccos(x_0^Tx_2), \\ d_3=\arccos(x_1^Tx_2). \end{gather*} Therefore we have to prove the following inequality \begin{multline} \label{strong-con} \arccos^2\bigg(\cos(\lambda d_3)\cos(d_1)\\ +\sin(\lambda d_3)\frac{\cos(d_2)-\cos(d_3)\cos(d_1)}{\sin(d_3)}\bigg)\\ \leq\quad(1-\lambda)d_1^2+\lambda d_2^2-\frac{\lambda(1-\lambda)\mu}{2}d_3^2. \end{multline} Alternatively, we can prove the inequality \begin{align*} \arccos^2(x_0^Tx_2)&>\arccos^2(x_0^Tx_1)-2(\log_{x_1}x_0)^T\log_{x_1}x_2\\ &\;\;+\frac{\mu}{2}\arccos^2(x_1^Tx_2), \end{align*} which in the $d_i$ notation reads \begin{align*} d_2^2&>d_1^2-2\bigg(d_1\frac{x_0-\cos(d_1)x_1}{\sin(d_1)}\bigg)^T\bigg(d_3\frac{x_2-\cos(d_3)x_1}{\sin(d_3)}\bigg)+\frac{\mu}{2}d_3^2\\ &=d_1^2-2d_1d_3\frac{\cos(d_2)-\cos(d_3)\cos(d_1)}{\sin(d_1)\sin(d_3)}+\frac{\mu}{2}d_3^2. \end{align*} The last inequality was checked to hold in Wolfram Mathematica for $d_1,d_2\in[0, \pi/4]$ and $d_3\in\Big[|d_1-d_2|, d_1+d_2\Big],$ where $\mu=1$. In order to prove the strong convexity of the Fr\'echet function in estimating the extrinsic mean on the sphere $S^n$, we prove the strong convexity of the squared extrinsic distance function from the point $x_0\in S^n$ \begin{equation*} d_e^2(x_0, x)=2(1-x_0^Tx). \end{equation*} So for the geodesic from the point $x_1\in S^n$ to the point $x_2\in S^n$ \begin{equation*} \begin{split} \gamma(\lambda)&=\exp_{x_1}\lambda\log_{x_1}x_2\\ &=\cos\big(\lambda\arccos(x_1^Tx_2)\big)x_1+\sin\big(\lambda\arccos(x_1^Tx_2)\big)\frac{x_2-(x_1^Tx_2)x_1}{\sqrt{1-(x_1^Tx_2)^2}}, \end{split} \end{equation*} we need to show that \begin{align*} d_e^2(x_0, \gamma(\lambda))&\leq(1-\lambda)d_e^2(x_0, x_1)+\lambda d_e^2(x_0, x_2)\\ &\;\;-\frac{\lambda(1-\lambda)\mu}{2}d_g^2(x_1, x_2). \end{align*} Therefore we have to prove \begin{multline}\label{strong-con-ext} 2\bigg(1-\cos(\lambda d_3)\cos(d_1)-\sin(\lambda d_3)\frac{\cos(d_2)-\cos(d_3)\cos(d_1)}{\sin(d_3)}\bigg)\\ \leq\quad2-2\big((1-\lambda)\cos(d_1)+\lambda\cos(d_2)\big)-\frac{\lambda(1-\lambda)\mu}{2}d_3^2.
\end{multline} Equivalently, we need to show \begin{multline*} 2(1-x_0^Tx_2)>\\ 2(1-x_0^Tx_1)-2\big(x_0-(x_0^Tx_1)x_1\big)^T\log_{x_1}x_2+\frac{\mu}{2}\arccos^2(x_1^Tx_2). \end{multline*} Thus \begin{align*} &2\big(1-\cos(d_2)\big)\\ &\quad\;>2\big(1-\cos(d_1)\big)- 2\big(x_0-\cos(d_1)x_1\big)^T\bigg(d_3\frac{x_2-\cos(d_3)x_1}{\sin(d_3)}\bigg)+\frac{\mu}{2}d_3^2\\ &\quad=2\big(1-\cos(d_1)\big)-2d_3\frac{\cos(d_2)-\cos(d_3)\cos(d_1)}{\sin(d_3)}+\frac{\mu}{2}d_3^2. \end{align*} The last inequality was verified in Wolfram Mathematica for $d_1,d_2\in[0, \pi/4]$ and $d_3\in\Big[|d_1-d_2|, d_1+d_2\Big],$ where $\mu=1$. \end{proof} \section{Accelerated algorithms for optimization on manifolds} \label{sec: proposed} \subsection{Weakly convex functions on manifolds with respect to retraction mapping} We first define general retraction-based, weakly convex, convex, and strongly convex functions by generalizing from their geodesic-based counterparts. We then prove an important proposition that can transform a non-convex function into a convex one simply by adding a multiple of the squared retraction distance to the objective function. \begin{definition} A {\it retraction} on a manifold $\mathcal{M}$ is a smooth mapping from its tangent bundle $\mathcal{R}: T\mathcal{M}\rightarrow\mathcal{M}$ with the following properties: \begin{enumerate} \item $\mathcal{R}_{\theta}(0_{\theta})=\mathcal{R}(\theta, 0_{\theta})=\theta,$ where $0_{\theta}$ denotes the zero vector on the tangent space $T_{\theta}\mathcal{M};$ \item For any point $\theta\in\mathcal{M}$ the differential $d(\mathcal{R}_{\theta})$ of the retraction mapping at the zero vector $0_{\theta}\in T_{\theta}\mathcal M$ has to be equal to the identity mapping on $T_{\theta}\mathcal{M},$ that is $d(\mathcal{R}_{\theta}(0_{\theta}))=d\big(\mathcal{R}(\theta, 0_{\theta})\big)={\rm id}_{T_{\theta}\mathcal{M}},$ where ${\rm id}_{T_{\theta}\mathcal{M}}$ denotes the identity mapping on $T_{\theta}\mathcal{M}.$ \end{enumerate} \end{definition} The exponential map on a Riemannian manifold can be viewed as a special case of the retraction map, and the inverse-exponential map is a special case of the inverse-retraction map. A good choice of retraction map can lead to a substantial reduction in computational burden compared to the exponential map. We see an example in Section~\ref{subsec:netflix} on the choice of a retraction map for the Grassmannian; Figure~\ref{fig-ret} provides a visualization of a retraction map. \begin{figure} \center \includegraphics[width=.5\linewidth]{test1_gray.png} \caption{Illustration of a retraction map on a manifold} \label{fig-ret} \end{figure} We first define the retraction distance function on $\mathcal M$ \begin{equation*} d_{\mathcal{R}}(\theta_0, \theta)=\|\mathcal{R}_{\theta_0}^{-1}\theta\|. \end{equation*} Since the differential of the retraction map at zero is the identity, there is a small enough neighborhood $D$ of the point $\theta$ where the inverse retraction map $\mathcal{R}_{\theta}^{-1}$ is bi-Lipschitz continuous in $D,$ i.e.
$d_{\mathcal{R}}$ satisfies the inequalities \begin{equation*} \frac{1}{K_1}d_{\mathcal{R}}(\vartheta_1, \vartheta_2)\leq\|\mathcal{R}_{\theta}^{-1}\vartheta_1-\mathcal{R}_{\theta}^{-1}\vartheta_2\|\leq K_2d_{\mathcal{R}}(\vartheta_1, \vartheta_2), \end{equation*} where $\vartheta_1, \vartheta_2\in D$, $K_1\geq1$, and $K_2\geq1.$ In addition, we also require the squared retraction distance function to be {\it $2R_1$-strongly retraction convex} around $\vartheta$--that is, for some $\delta>0$ and constant $0\leq R_1\leq1$ the following inequality holds: \begin{equation}\label{dR-strongly-convex-2} d_{\mathcal{R}}^2(\theta_2, \vartheta)\geq d_{\mathcal{R}}^{2}(\theta_1, \vartheta)+\langle\nabla d_{\mathcal{R}}^2(\theta_1, \vartheta), \mathcal{R}_{\theta_1}^{-1}\theta_2\rangle+R_1d_{\mathcal{R}}^2(\theta_1, \theta_2), \end{equation} where $d_{\mathcal{R}}(\theta_i, \vartheta)<\delta,$ $i=1,2.$ Due to the fact that at the zero vector $0_{\vartheta}\in T_{\vartheta}\mathcal{M}$ the differential of $\mathcal{R}_{\vartheta}$ is equal to the identity mapping, we can see that in a small neighborhood of $\vartheta$, the squared retraction distance function behaves like the squared norm, which is strongly convex. \begin{definition} Consider a function $f:\mathcal{M}\rightarrow\bar{\mathbb{R}}$ and a point $\theta$ with $f(\theta)$ finite. The {\it $\mathcal{R}$-subdifferential} of $f$ at $\theta$ is the set \begin{equation*} \begin{split} \partial f(\theta)=\Big\{v\in T_{\theta}\mathcal{M}: f(\vartheta)\geq f(\theta)+\langle v, \mathcal{R}^{-1}_{\theta}\vartheta\rangle+o\big(d_{\mathcal{R}}(\theta, \vartheta)\big)\\ \forall \vartheta\in \mathcal{M}\Big\}. \end{split} \end{equation*} \end{definition} We now define the notion of convex functions on manifolds with respect to the retraction map. \begin{definition} A function $f$ is {\it convex with respect to the retraction} $\mathcal{R}$ if for any points $\theta_1, \theta_2\in\mathcal{M}$ the inequality holds \begin{equation} \label{f-convex-2} f(\theta_2)\geq f(\theta_1)+\langle v, \mathcal{R}_{\theta_1}^{-1}\theta_2\rangle, \qquad v\in\partial f(\theta_1). \end{equation} \end{definition} Now we are ready to define one of the most important classes of non-convex functions, called \emph{weakly convex functions}, a class that covers most of the interesting examples of non-convex objectives in machine learning. \begin{definition} A function $f$ is {\it $\rho$-weakly convex with respect to the retraction} $\mathcal{R}$ if for any points $\theta_1, \theta_2\in\mathcal{M}$ the inequality holds \begin{equation} \label{f-weakly-convex-2} f(\theta_2)\geq f(\theta_1)+\langle v, \mathcal{R}_{\theta_1}^{-1}\theta_2\rangle-\frac{\rho}{2}d_{\mathcal{R}}^2(\theta_1, \theta_2), \qquad v\in\partial f(\theta_1). \end{equation} \end{definition} Given the strong retraction convexity of the squared retraction distance (see \eqref{dR-strongly-convex-2}), we can regularize the weakly convex function $f$ by adding the term $\frac{\kappa}{2}d_{\mathcal{R}}^2(\theta, \vartheta)$ and turn it into a convex function through the following proposition.
\begin{proposition} \label{prop1} Let $d_{\mathcal{R}}$ be a retraction distance that is strongly retraction convex, i.e., satisfies the inequality \eqref{dR-strongly-convex-2}, in the subset $D\subset\mathcal{M}.$ Then the function $f$ is $R_1\kappa$-weakly convex in $D$ if and only if the function \begin{equation*}h_{\kappa}(\theta, \vartheta)=f(\theta)+\frac{\kappa}{2} d_{\mathcal{R}}^2(\theta, \vartheta)\end{equation*} is convex in $D.$ \end{proposition} \begin{proof} Let $f$ be $R_1\kappa$-weakly convex. Then for any $\theta_1, \theta_2\in D$ \begin{align*} f(\theta_2)&\geq f(\theta_1)+\langle\partial f(\theta_1), \mathcal{R}_{\theta_1}^{-1}\theta_2\rangle-\frac{R_1\kappa}{2}d_{\mathcal{R}}^2(\theta_1, \theta_2)\\ &\geq f(\theta_1)+\langle\partial f(\theta_1), \mathcal{R}_{\theta_1}^{-1}\theta_2\rangle+ \frac{\kappa}{2}d_{\mathcal{R}}^{2}(\theta_1, \vartheta)\\ & \quad\quad +\langle\nabla \frac{\kappa}{2}d_{\mathcal{R}}^2(\theta_1, \vartheta), \mathcal{R}_{\theta_1}^{-1}\theta_2\rangle-\frac{\kappa}{2}d_{\mathcal{R}}^2(\theta_2, \vartheta), \end{align*} which implies \begin{equation*} h_{\kappa}(\theta_2, \vartheta)\geq h_{\kappa}(\theta_1, \vartheta)+\langle\partial h_{\kappa}(\theta_1, \vartheta), \mathcal{R}_{\theta_1}^{-1}\theta_2\rangle. \end{equation*} \end{proof} For functions defined on a Euclidean space we have a definition of a weakly convex function that is equivalent to \eqref{f-weakly-convex-2}: \begin{equation} \label{f-weakly-convex-0} f\big(\mathcal{R}_{\vartheta}(\lambda\mathcal{R}^{-1}_{\vartheta}\theta)\big)\leq\lambda f(\theta)+(1-\lambda)f(\vartheta)+\frac{\rho\lambda(1-\lambda)}{2}d_{\mathcal{R}}^2(\theta, \vartheta). \end{equation} Over the manifold, however, there is no such straightforward equivalence. This is due to the distance function $d_{\mathcal{R}}^2(\vartheta, \theta)$, which does not satisfy the following equality: \begin{align} \label{dR-equality} d_{\mathcal{R}}^2(\mathcal{R}_{\theta_1}\lambda\mathcal{R}_{\theta_1}^{-1}\theta_2, \vartheta)&\nonumber=\lambda d_{\mathcal{R}}^2(\theta_2, \vartheta)+(1-\lambda)d_{\mathcal{R}}^2(\theta_1, \vartheta)\\ &\quad\quad-\lambda(1-\lambda)d_{\mathcal{R}}^2(\theta_1, \theta_2). \end{align} Nevertheless, in some neighborhood of $\vartheta$, for some $\delta>0$, the following inequality holds \begin{equation}\label{dR-strongly-convex-1} \begin{split} d_{\mathcal{R}}^2(\mathcal{R}_{\theta_1}\lambda\mathcal{R}_{\theta_1}^{-1}\theta_2, \vartheta)\leq\lambda d_{\mathcal{R}}^2(\theta_2, \vartheta) +(1-\lambda)d_{\mathcal{R}}^2(\theta_1, \vartheta)\\ -\lambda(1-\lambda)R_1 d_{\mathcal{R}}^2(\theta_1, \theta_2), \end{split} \end{equation} where $d_{\mathcal{R}}(\theta_1, \vartheta)<\delta$ and $d_{\mathcal{R}}(\theta_2, \vartheta)<\delta$. Therefore the function $f$ is {\it $\rho$-weakly convex with respect to the retraction} $\mathcal{R}$ if for any points $\theta, \vartheta\in\mathcal{M}$ and any $\lambda\in[0, 1]$ the approximate secant inequality holds \begin{equation*}\label{f-weakly-convex-1} f\big(\mathcal{R}_{\vartheta}(\lambda\mathcal{R}^{-1}_{\vartheta}\theta)\big)\leq\lambda f(\theta)+(1-\lambda)f(\vartheta)+ \frac{\rho\lambda(1-\lambda)}{2}d_{\mathcal{R}}^2(\theta, \vartheta), \end{equation*} where $d_{\mathcal{R}}(\theta, \vartheta)<\delta.$ \subsection{The acceleration algorithm on manifolds} In this section, we propose our acceleration algorithms for convex and non-convex functions on manifolds.
\subsection{The acceleration algorithm on manifolds} In this section, we propose our acceleration algorithms for convex and non-convex functions on manifolds.
We first minimize a convex subproblem of the objective function $f$ using some existing approach $\mathcal{A}$ (such as a gradient descent algorithm), where the subproblem is written as \begin{equation*} h_{*}(\vartheta)=\min_{\theta\in \mathcal{M}}\left\{f(\theta)+\frac{\kappa}{2}d_{\mathcal{R}}^2(\theta, \vartheta)\right\}, \end{equation*} with a positive regularization parameter $\kappa$. Proposition \ref{prop1} ensures the convexity of the subproblem for an appropriate level of regularization. Therefore, with an existing approach $\mathcal{A}$, we define the {\it proximal operator} \begin{equation*} p(\vartheta)={\rm prox}_{f/\kappa}(\vartheta)=\arg\min_{\theta\in \mathcal{M}}\Big\{f(\theta)+\frac{\kappa}{2}d_{\mathcal{R}}^{2}(\theta, \vartheta)\Big\}, \end{equation*} where $\vartheta$ is a {\it prox-center.} For the computation of $p(\vartheta)$, we focus on methods $\mathcal A$ with linear convergence rates. Specifically, a minimization algorithm $\mathcal{A},$ generating the sequence of iterates $(\theta_k)_{k\geq0},$ has a {\it linear convergence rate} if there exist $\tau_{\mathcal{A}, f}\in (0, 1)$ and a constant $C_{\mathcal{A}, f}\in\mathbb{R}$ such that \begin{equation*} f(\theta_k)-f_{*}\leq C_{\mathcal{A}, f}(1-\tau_{\mathcal{A}, f})^k, \end{equation*} where $f_{*}$ is the minimum value of $f.$ There are multiple optimization algorithms on manifolds with linear convergence rates for strongly convex functions. These include gradient descent, conjugate gradient descent, MASAGA \citep{Babanezhad2018MASAGAAL}, RSVRG \citep{Zhang:2016:RSF}, and many others. For a prox-center $\vartheta$ and a smoothing parameter $\kappa$, we let \begin{align*} h_{\kappa}(\theta, \vartheta)=f(\theta)+\frac{\kappa}{2}d_{\mathcal{R}}^2(\theta, \vartheta). \end{align*} At the $k$-th iteration, given a previous iterate $\theta_{k-1}$ and the extrapolation term $\tilde{\vartheta}_{k-1},$ we perform the following steps: \begin{enumerate} \item {\bf Proximal point step.} \begin{align*}\bar{\theta}_k\approx\arg\min_{\theta\in \mathcal{M}}h_{\kappa}(\theta, \theta_{k-1}). \end{align*} \item {\bf Accelerated proximal point step.} \begin{align*}\vartheta_k=\mathcal{R}_{\theta_{k-1}}\left(\alpha_k\mathcal{R}_{\theta_{k-1}}^{-1}\tilde{\vartheta}_{k-1}\right), &\qquad \tilde{\theta}_k\approx\arg\min_{\theta\in\mathcal{M}}h_{\kappa}(\theta, \vartheta_k), \\ \tilde{\vartheta}_k=\mathcal{R}_{\theta_{k-1}}\left(\frac{1}{\alpha_k}\mathcal{R}_{\theta_{k-1}}^{-1}\tilde{\theta}_k\right), &\qquad \frac{1-\alpha_{k+1}}{\alpha_{k+1}^2}=\frac{1}{\alpha_k^2}. \end{align*} \end{enumerate} One needs a stopping criterion for these inner minimizations, since in the non-convex case the functional gap cannot be used as in the convex case. We adopt a stationarity stopping criterion which consists of two conditions: \begin{itemize} \item {\bf Descent condition:} $h_{\kappa}(\theta, \vartheta) \leq h_{\kappa}(\vartheta, \vartheta);$ \item {\bf Adaptive stationary condition:} ${\rm dist} \big(0_{\theta},\partial_{\theta} h_{\kappa}(\theta, \vartheta)\big) < \kappa d_{\mathcal{R}}(\theta, \vartheta).$ \end{itemize} Here, ${\rm dist}(\cdot, \cdot)$ denotes the standard Euclidean distance on the tangent space. Recall that a quadratic of the retraction distance is added to $f$ to make the subproblem convex. Hence, if the weak-convexity parameter $\rho$ is known, then one should set $\kappa>\rho$ to make the subproblem convex.
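As an illustration, the sketch below (ours, not the authors' implementation) performs one inexact proximal point step on $\mathbb{S}^2$ with the retraction $\mathcal{R}_{v}u=(v+u)/\|v+u\|$, taking $\mathcal{A}$ to be Riemannian gradient descent and checking the two stopping conditions afterwards; the objective $f(\theta)=-\langle\theta,a\rangle$, the step size, and the iteration budget are illustrative choices.
\begin{verbatim}
import numpy as np

a = np.array([0.6, 0.0, 0.8])

def retract(v, u):
    w = v + u
    return w / np.linalg.norm(w)

def d_R2(theta, c):                  # |R_c^{-1} theta|^2 = 1/<c,theta>^2 - 1
    return 1.0 / (c @ theta) ** 2 - 1.0

def h(theta, c, kappa):              # h_kappa(theta, c) with f(theta) = -<theta, a>
    return -(theta @ a) + 0.5 * kappa * d_R2(theta, c)

def grad_h(theta, c, kappa):         # Riemannian gradient via tangent projection
    g = -a - kappa * c / (c @ theta) ** 3
    return g - (g @ theta) * theta

def prox_step(c, kappa, T=200, lr=0.05):
    theta = c.copy()                 # initialize at the prox-center (f is smooth)
    for _ in range(T):
        theta = retract(theta, -lr * grad_h(theta, c, kappa))
    descent = h(theta, c, kappa) <= h(c, c, kappa)
    stationary = (np.linalg.norm(grad_h(theta, c, kappa))
                  < kappa * np.sqrt(d_R2(theta, c)))
    return theta, descent, stationary

theta, descent, stationary = prox_step(np.array([1.0, 0.0, 0.0]), kappa=2.0)
print(theta, "descent:", descent, "adaptive stationarity:", stationary)
\end{verbatim}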
In this case, it is proven that the number of inner calls to $\mathcal{A}$ for the subproblems \begin{equation}\label{SubProblem} \min_{\vartheta\in\mathcal M}h_{\kappa}(\vartheta, \theta) \end{equation} can be bounded, given a proper initialization point $\vartheta_0:$ \begin{itemize} \item if $f$ is smooth, then set $\vartheta_0=\theta;$ \item if $f=f_0+\psi,$ where $f_0$ is $L$-smooth, then set $\vartheta_0={\rm prox}_{\eta\psi}\big(\mathcal{R}_{\theta}\left(-\eta\nabla f_0(\theta)\right)\big)$ with $\eta\leq\frac{1}{L+\kappa}.$ \end{itemize} However, in general one does not have knowledge of $\rho.$ Thus we propose a method, algorithm $\mathcal{A}_1$ (Algorithm~\ref{adapt}), that handles the convexity of the subproblem adaptively. Our idea is to let $\mathcal{A}$ run on the subproblem for $T$ predefined iterations, output the point $\bar{\theta}_T,$ and check whether a sufficient decrease occurs. If the subproblem is convex, then the aforementioned descent and adaptive stationary conditions are guaranteed. If either of the conditions is violated, then the subproblem is deemed non-convex; in this case, we double the value of $\kappa$ and repeat the previous steps. The tuning parameter $\kappa$ should be chosen big enough to ensure the convexity of the subproblems and, at the same time, small enough to obtain the optimal complexity, by not letting the subproblem deviate too far from the original objective function. Thus we introduce $\kappa_{cvx}$ as an $\mathcal{A}$-{\it dependent smoothing parameter.} Notice that the linear convergence rate $\tau_{\mathcal{A}, h_{\kappa}}$ of $\mathcal A$ is independent of the prox-center and varies with $\kappa$. We define $\kappa_{cvx}$ as \begin{equation*} \kappa_{cvx}=\arg\max_{\kappa>0}\frac{\tau_{\mathcal{A}, h_{\kappa}}}{\sqrt{L+\kappa}}. \end{equation*} \begin{algorithm} \caption{$\mathcal{A}_1$: The Adaptation Algorithm on Manifolds } \label{adapt} {\bf input} the point $\theta\in \mathcal{M},$ the smoothing parameter $\kappa$ and the number of iterations $T$\\ \Repeat { \rm $h_{\kappa}(\bar{\theta}_T, \theta)<h_{\kappa}(\theta, \theta)$ and ${\rm dist}(\partial h_{\kappa}(\bar{\theta}_T, \theta), 0_{\bar{\theta}_T})<\kappa d_{\mathcal{R}}(\bar{\theta}_T, \theta)$} {Compute \begin{equation*} \bar{\theta}_{T}\approx\arg\min_{\vartheta\in\mathcal{M}} h_{\kappa}(\vartheta, \theta) \end{equation*} by running $T$ iterations of $\mathcal{A}$, using the initialization strategy described below Equation~\eqref{SubProblem}.\\ {\bf If} $h_{\kappa}(\bar{\theta}_T, \theta)>h_{\kappa}(\theta, \theta)$ or ${\rm dist}(\partial h_{\kappa}(\bar{\theta}_T, \theta), 0_{\bar{\theta}_T})>\kappa d_{\mathcal{R}}(\bar{\theta}_T, \theta)$\\ {\bf then} return to the repeat step, replacing $\kappa$ with $2\kappa.$\\ } {\bf output} $(\bar{\theta}_T, \kappa)$ \end{algorithm} Finally, for an initial estimate $\theta_0\in \mathcal{M},$ smoothing parameters $\kappa_0, \kappa_{cvx},$ an optimization algorithm $\mathcal{A}$, and a stopping criterion based on a fixed budget $T$ and $S$, we have the following acceleration algorithm, $\mathcal A_2$, for the manifold setting (Algorithm \ref{Catlalyst2}).
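The doubling rule can be visualized with a one-dimensional Euclidean toy; this sketch is ours and, for clarity, replaces the descent/stationarity test of Algorithm~\ref{adapt} by a direct sampled convexity check of $h_{\kappa}$ (the function $f$ below has $f''\geq-2$, so it is $2$-weakly convex, and the doubling sequence should stop at the first $\kappa\geq2$).
\begin{verbatim}
import numpy as np

f = lambda x: 0.25 * x ** 4 - x ** 2      # f''(x) = 3x^2 - 2 >= -2

xs = np.linspace(-2.0, 2.0, 401)
center, kappa = 0.0, 0.25
while True:
    h = f(xs) + 0.5 * kappa * (xs - center) ** 2
    second_diff = h[:-2] + h[2:] - 2.0 * h[1:-1]   # >= 0 iff discretely convex
    if second_diff.min() >= 0.0:
        break
    kappa *= 2.0                                   # subproblem non-convex: double
print("first kappa in the doubling sequence with h_kappa convex:", kappa)  # 2.0
\end{verbatim}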
\begin{algorithm} \caption{$\mathcal{A}_2$: Acceleration Algorithm on Manifolds } \label{Catlalyst2} Initialize $\Tilde{\vartheta}_0=\theta_0,$ $\alpha_1=1.$\\ \Repeat{\rm the stopping criterion \ \ \ ${\rm dist}(\partial f(\bar{\theta}_k), 0_{\bar{\theta}_k})<\varepsilon$ is met} {for $k=1, 2, ...$ \begin{enumerate} \item compute $(\bar{\theta}_k, \kappa_k)=\mathcal{A}_1(\theta_{k-1}, \kappa_{k-1}, T)$ \item compute $\vartheta_k=\mathcal{R}_{\theta_{k-1}}\left(\alpha_k\mathcal{R}^{-1}_{\theta_{k-1}}\Tilde{\vartheta}_{k-1}\right)$ and apply $S_k\log(k+1)$ iterations of $\mathcal{A}_1$ to find \begin{equation*} \Tilde{\theta}_k\approx\arg\min_{\theta\in\mathcal{M}} h_{\kappa_{cvx}}(\theta, \vartheta_k), \end{equation*} using the initialization strategy described below \eqref{SubProblem}. \item Update $\Tilde{\vartheta}_k$ and $\alpha_{k+1}$: \begin{align*} \tilde{\vartheta}_k&=\mathcal{R}_{\theta_{k-1}}\left(\frac{1}{\alpha_{k}}\mathcal{R}^{-1}_{\theta_{k-1}}\Tilde{\theta}_k\right),\\ \alpha_{k+1}&=\frac{\sqrt{\alpha_k^4+4\alpha_k^2}-\alpha_k^2}{2}. \end{align*} \item Choose $\theta_k$ to be any point satisfying $f(\theta_k)=\min\{f(\bar{\theta}_k), f(\Tilde{\theta}_k)\}.$ \end{enumerate} } \end{algorithm} \begin{remark} Note that there are two sequences $\left\{\tilde{\theta}_k\right\}$ and $\left\{\bar{\theta}_k\right\}$ in Algorithm $\mathcal A_2$. Since the extrapolation step is designed for the convex case, the second sequence $\{\tilde{\theta}_k\}$ approximates the optimal point at an accelerated rate, which means that it approaches the optimal point faster than the first sequence $\left\{\bar{\theta}_k\right\}$. Intuitively, when the first sequence is chosen, the method falls back on the initial algorithm $\mathcal A$ and adapts the smoothing parameter to our objective, implying that the Nesterov step failed to accelerate convergence. \end{remark} In the adaptation method $\mathcal{A}_1(\theta_{k-1}, \kappa_{k-1}, T)$, the resulting $\bar{\theta}_k$ and $\kappa_k$ have to satisfy the following inequalities \begin{align}\label{C1} {\rm dist}\big(0_{\bar{\theta}_k}, \partial h_{\kappa_k}(\bar{\theta}_k, \theta_{k-1})\big)&<\kappa_k d_{\mathcal{R}}(\bar{\theta}_k, \theta_{k-1}) \quad \text{and} \\ \quad h_{\kappa_k}(\bar{\theta}_k, \theta_{k-1})&\leq h_{\kappa_k}(\theta_{k-1}, \theta_{k-1}). \end{align} The resulting $\Tilde{\theta}_k$ needs to satisfy the condition that if the function $f$ is convex, then \begin{equation} \label{C2} {\rm dist}\big(0_{\Tilde{\theta}_k}, \partial h_{\kappa_{cvx}}(\Tilde{\theta}_k, \vartheta_k)\big)<\frac{\kappa_{cvx}}{k+1}d_{\mathcal{R}}(\Tilde{\theta}_k, \vartheta_k). \end{equation} We then have the following lemma: \begin{lemma} \label{lem-2} Suppose $\theta$ satisfies ${\rm dist}(0_{\theta}, \partial h_{\kappa}(\theta, \vartheta))<\varepsilon$ and $|\nabla d_{\mathcal{R}}^{2}(\theta, \vartheta)|\leq Kd_{\mathcal{R}}(\theta, \vartheta)$. Then the following inequality holds: \begin{equation*} {\rm dist}(0_{\theta}, \partial f(\theta))\leq\varepsilon+\kappa Kd_{\mathcal{R}}(\theta, \vartheta). \end{equation*} \end{lemma} \begin{proof} We can find $v\in\partial h_{\kappa}(\theta, \vartheta)$ with $\|v\|\leq\varepsilon.$ Taking into account $\partial h_{\kappa}(\theta, \vartheta)=\partial f(\theta)+\frac{\kappa}{2}\nabla d_ {\mathcal{R}}^{2}(\theta, \vartheta),$ the result follows.
\end{proof}
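Returning briefly to the extrapolation coefficients of Algorithm~\ref{Catlalyst2}: the closed-form update in step 3 is the positive root of $\alpha_{k+1}^2=(1-\alpha_{k+1})\alpha_k^2$, i.e.\ of the relation $(1-\alpha_{k+1})/\alpha_{k+1}^2=1/\alpha_k^2$ used in the accelerated proximal point step. A quick check (ours) confirms this and the $\Theta(1/k)$ decay of $\alpha_k$, which is what ultimately drives the $O(1/N^2)$ rate stated below.
\begin{verbatim}
import math

alpha = 1.0                               # alpha_1 = 1
for k in range(1, 11):
    nxt = (math.sqrt(alpha ** 4 + 4 * alpha ** 2) - alpha ** 2) / 2.0
    # the update solves (1 - alpha_{k+1}) / alpha_{k+1}^2 = 1 / alpha_k^2
    assert abs((1.0 - nxt) / nxt ** 2 - 1.0 / alpha ** 2) < 1e-10
    alpha = nxt
    # empirically 2/(k+3) <= alpha_{k+1} <= 2/(k+2)
    print(k + 1, round(alpha, 4), round(2.0 / (k + 3), 4), round(2.0 / (k + 2), 4))
\end{verbatim}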
Since we assume that the retraction distance function $d_{\mathcal{R}}$ is continuous, we can deduce that the vector field $\nabla d_{\mathcal{R}}^{2}(\theta, \vartheta)$ is continuous, so the conditions of Lemma \ref{lem-2} are very mild. Also, as mentioned previously, the squared retraction distance function $d_{\mathcal{R}}^2(\cdot, \vartheta)$ acts like a squared norm function in a small neighborhood of $\vartheta$. We define the following retraction-based strongly convex functions: \begin{definition} A function $f$ is {\it $\mu$-strongly convex with respect to the retraction} $\mathcal{R}$ if for any points $\theta_1, \theta_2\in\mathcal{M}$ and $\mu>0$ the following inequality holds: \begin{equation} \label{f-strongly-convex-2} f(\theta_2)\geq f(\theta_1)+\langle v, \mathcal{R}_{\theta_1}^{-1}\theta_2\rangle+\frac{\mu}{2}d_{\mathcal{R}}^2(\theta_1, \theta_2), \; v\in\partial f(\theta_1). \end{equation} \end{definition} Then we have the following convergence analysis for the acceleration algorithm $\mathcal{A}_2$: \begin{theorem} Fix real-valued constants $\kappa_0, \kappa_{cvx}>0$ and the point $\theta_0\in\mathcal{M}.$ Set $\kappa_{\max}=\max_{k\geq1}\kappa_k.$ Suppose that the number of iterations $T$ is such that $\bar{\theta}_k$ satisfies \eqref{C1}, and $|\nabla d_{\mathcal{R}}^{2}(\theta, \vartheta)|\leq Kd_{\mathcal{R}}(\theta, \vartheta).$ Define $f^{*}=\lim_{k\rightarrow\infty}f(\theta_k).$ Then for any $N\geq1,$ the iterated sequence generated by the acceleration algorithm satisfies \begin{equation*} \min_{j=1, ..., N}\Big\{{\rm dist}^2\big(0_{\bar{\theta}_j}, \partial f(\bar{\theta}_j)\big)\Big\}\leq\frac{8\kappa_{\max}K^2}{N}\left(f(\theta_0)-f^{*}\right). \end{equation*} If in addition the function $f$ is $\kappa_{cvx}(K_1^4K_2^4-R_1)$-strongly convex and $S_k$ is chosen so that $\Tilde{\theta}_k$ satisfies \eqref{C2}, then \begin{equation}\label{acceleration} f(\theta_N)-f^{*}\leq\frac{4\kappa_{cvx} K_1^2K_2^2}{(N+1)^2}d_{\mathcal{R}}^2(\theta^{*}, \theta_0), \end{equation} where $\theta^{*}$ is any minimizer of the function $f.$ \end{theorem} The detailed proof of this theorem can be found in the Appendix. \begin{remark} If the original method $\mathcal A$ has a linear rate of convergence, then our method $\mathcal A_2$ also converges to the local minimum in the strongly convex case. If knowledge of the strong convexity is given, then some existing methods can achieve the optimal linear rate for smooth and convex functions \citep{pmlr-v49-zhang16b}. However, it is in general extremely difficult to verify the convexity of a function on a manifold, and our method adapts to it without requiring this knowledge. Note that, for non-smooth functions, our algorithm also applies to the subgradient descent method, where instead of the gradient of the function one takes an element of the subdifferential. In this case, for strongly convex objectives, the subgradient method converges to the optimum with an $O(1/N)$ rate of convergence (see \cite{pmlr-v49-zhang16b}). Thus our accelerated rate $O(1/N^2)$ can be considered optimal for strongly convex functions on the manifold. \end{remark} \section{Related work} \label{sec:related} \cite{NIPS2017_7072} propose accelerated first-order methods for \emph{geodesically convex optimization} on Riemannian manifolds. This is a direct generalization of Nesterov's original linear extrapolation mechanism to general Riemannian manifolds via a non-linear operator.
One drawback of \cite{NIPS2017_7072} is that the accelerated step of their algorithm involves exactly solving non-trivial implicit equations. \cite{pmlr-v75-zhang18a} later proposed a computationally tractable accelerated gradient algorithm and a novel estimation sequence for convergence analysis. Our approach is fundamentally different from theirs. We regularize an objective function with a squared retraction distance (see Proposition~\ref{prop1}), solve a sequence of convex subproblems, adapt to the degree of weak convexity of the objective function, and produce accelerated rates for convex objectives. Even in the convex case, our approach can deal with a much broader class of retraction-based convex functions. \cite{pmlr-v84-paquette18a} proposes a general scheme called ``Catalyst acceleration'' for solving general optimization problems in Euclidean space, which inspired some of the ideas developed in our work. Similar ideas have been explored for convex functions in Euclidean space in both theory and practice \citep{Lin:2017}. However, optimization problems on manifolds are of a fundamentally different nature and require the development of substantially new tools and theory. There is an interesting line of work proposing fast algorithms for stochastic optimization on manifolds \citep[see][]{Zhang:2016:RSF, 2018arXiv181104194Z, pmlr-v89-zhou19a, 6487381}, which employ very different techniques such as minibatching, variance reduction and utilizing the uncertainty of inputs. Works like \cite{zhang2018towards} propose optimization methods that are analogous to Nesterov-type algorithms on manifolds. \section{Simulation study and data analysis}\label{sec-simu} To examine the convergence and acceleration rates of our proposed algorithm, we first apply our method to the estimation of both intrinsic and extrinsic Fr\'echet means on spheres, where in the extrinsic case the exact optimum is available for comparison. We also apply our algorithm to the Netflix movie-ranking data set as an example of optimization over Grassmannian manifolds in the low-rank matrix completion problem. \subsection{Estimation of intrinsic Fr\'echet means on manifolds}\label{subsec:frechet} We first consider the estimation problem of Fr\'echet means on manifolds \citep{frechet}. In this simple example, we have observations $\{x_1,\ldots, x_N\}$ that lie on a sphere $ \mathbb{S}^{d} $ and our goal is to estimate the sample mean: \begin{align} \hat\theta=\arg\min_{\theta\in \mathbb{S}^d} f(\theta), \hspace{1em} f(\theta) = \sum_{i=1}^N \rho^2(\theta, x_i). \end{align} If $\rho$ is the embedded distance metric in the Euclidean space, then there exists a closed form solution $\hat\theta=\sum_{i=1}^N x_i/\| \sum_{i=1}^N x_i\|$, which is the projection of the Euclidean mean $\bar{x}$ onto the sphere \citep{rabibook}. This is called the \emph{extrinsic mean.} When $\rho$ is taken to be the geodesic or intrinsic distance, $\hat\theta$ is called the \emph{intrinsic mean}. We will consider estimation of both extrinsic and intrinsic means using our method compared to other optimization techniques. One simple example of a retraction map for $\mathbb{S}^d$ is \begin{equation*} \mathcal{R}_{\vartheta}v=\frac{\vartheta+v}{|\vartheta+v|}, \end{equation*} where $|\cdot|$ is the Euclidean norm in $\mathbb{R}^{d+1}.$ The corresponding inverse retraction has the following expression: \begin{equation*} \mathcal{R}_{\vartheta}^{-1}\theta=\frac{1}{\vartheta^T\theta}\theta-\vartheta.
\end{equation*}
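As a sanity check on these formulas, the following sketch (ours, with an ad hoc step size and synthetic data rather than the experimental setup described below) shows that projected Riemannian gradient descent with this retraction on the extrinsic objective recovers the closed-form extrinsic mean.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def retract(v, u):                   # R_v(u) = (v + u) / |v + u|
    w = v + u
    return w / np.linalg.norm(w)

X = rng.normal(size=(1000, 10))
X /= np.linalg.norm(X, axis=1, keepdims=True)     # synthetic points on S^9

def grad_f(theta):                   # gradient of sum_i |theta - x_i|^2 ...
    g = 2.0 * (len(X) * theta - X.sum(axis=0))
    return g - (g @ theta) * theta   # ... projected onto the tangent space

theta = X[0].copy()
for _ in range(500):
    theta = retract(theta, -0.001 * grad_f(theta))

closed_form = X.sum(axis=0) / np.linalg.norm(X.sum(axis=0))
print("error vs closed-form extrinsic mean:",
      np.linalg.norm(theta - closed_form))
\end{verbatim}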
We first compare our accelerated method against gradient descent optimization, a Newton-type optimization scheme, DANE \citep{Shamir}, and a Nesterov method, RAGD \citep{zhang2018towards}, adapted for manifolds. For all the experiments in this section, we optimized the step size of the optimizer using an Armijo-condition backtracking line search \citep{armijo1966minimization}, where we reduce the step size by a factor of $ .95 $ until the difference between the old loss function evaluation and the new one is $ 10^{-5} \times .95$. For our Catalyst algorithm on manifolds we set the $\mathcal{A}_2$ budget to $ S=10 $ and the number of $\mathcal{A}_1$ iterations to $ T=5 $; the cutoff parameter for $\mathcal{A}_1$ is initialized at $ .1 $. For the DANE results, we set the regularization term to $ 1 $. For RAGD we set the shrinkage parameter to $ 1 $. Our synthetic data set consists of 10,000 observations generated i.i.d.\ from a $100$-dimensional $\mbox{N}(0, I)$ distribution and projected onto $ \mathbb{S}^{99} $. We run each optimization routine for 100 iterations. Figure~\ref{fig:intrinsic_comparison} and Figure~\ref{fig:extrinsic_comparison} show that our novel accelerated method converges, for an intrinsic mean as well as an extrinsic mean example, to an optimum in fewer iterations than the other competing methods, both in terms of the loss function value and the norm of the loss function gradient. Moreover, in the intrinsic mean example our method obtains a smaller loss function value and gradient norm than the competing methods. In the extrinsic mean example, our method reaches in fewer iterations a loss function value, and an MSE between the learned parameter and the closed-form expression of the sample mean, comparable with those of the other methods, and obtains a smaller gradient norm than the competing methods. By explicit calculation we show that the objective functions are strongly convex over a neighborhood of any point on the manifold (see the Appendix for a proof). This is a highly non-trivial task for general objective functions, hence necessitating an adaptive method such as ours. Moreover, in the extrinsic mean example, since we have a closed form expression of the Fr\'echet mean, we also show that our optimization approach converges to the true extrinsic mean in terms of mean squared error faster than the other optimization methods. \begin{figure} \centering \includegraphics[width=.5\linewidth]{figs/intrinsic_iterations.png}\includegraphics[width=.5\linewidth]{figs/intrinsic_wall_time.png} \caption{Intrinsic mean comparison on spheres} \label{fig:intrinsic_comparison} \end{figure} \begin{figure} \centering \includegraphics[width=.5\linewidth]{figs/extrinsic_iters.png}\includegraphics[width=.5\linewidth]{figs/extrinsic_wall_time.png} \caption{Extrinsic mean comparison on spheres} \label{fig:extrinsic_comparison} \end{figure} \subsection{Real data analysis: the Netflix example}\label{subsec:netflix} Next, we consider an application of our algorithm to the Netflix movie rating dataset. This dataset of over a million entries, $ X \in \mathbb{R}^{M \times N} ,$ consists of $M = 17770$ movies and $N = 480189$ users, in which only a sparse subset of the user-movie pairs have ratings.
In order to build a better recommendation system for users, we can frame the problem of predicting users' ratings for movies as a low-rank matrix completion problem: we learn a rank-$ r $ subspace $ U \in {\rm Gr}(M, r) $ on the Grassmann manifold which optimizes, for the set of observed entries $ (i,j) \in \Omega$, the loss function \begin{equation} L(U) = \frac{1}{2} \sum_{(i,j) \in \Omega} \left\{ (UW)_{ij} - X_{ij} \right\}^{2} + \frac{\lambda^{2}}{2} \sum_{(i,j) \notin \Omega}(UW)_{ij}^{2}, \end{equation} where $W$ is an $r$-by-$N$ matrix. Each user $ k$ has the loss function $\mathcal{L}(U,k)=\frac{1}{2} \left| c_k\circ\left( Uw_k(U)-X_k \right) \right|^{2}$, where $\circ$ is the Hadamard product, $(w_k)^{i}=W_{ik},$ and \begin{equation*} \begin{split} (c_{k})^i=\begin{cases} 1, & {\rm if} \ \ \ (i, k)\in \Omega \\ \lambda, & {\rm if} \ \ \ (i, k)\notin \Omega \end{cases}, \qquad (X_{k})^i=\begin{cases} X_{ik}, & {\rm if} \ \ \ (i, k)\in \Omega \\ 0, & {\rm if} \ \ \ (i, k)\notin \Omega, \end{cases}\\ w_k(U)=\big(U^T{\rm diag}(c_k\circ c_k)U\big)^{-1}U^T\big(c_k\circ c_k\circ X_k\big). \end{split} \end{equation*} This results in the following gradient \begin{align*} \nabla\mathcal{L}(U, k)&= \big(c_k\circ c_k\circ(Uw_k(U)-X_k)\big)w_k(U)^T\\ &= {\rm diag}(c_k\circ c_k)(Uw_k(U)-X_k)w_k(U)^T. \end{align*} For this problem on Grassmann manifolds, we have the retraction map \begin{equation} \mathcal{R}_{V}U = U+V \end{equation} and the inverse retraction map \begin{equation} \mathcal{R}_{V}^{-1}U = V - U(U^{T}U)^{-1}U^{T}V. \end{equation} We compare our method against a standard gradient descent method on a subset of the data where we only observe a million ratings ($ \approx 1.5\%$ of the full data set). In this setting we fix the matrix rank $ r=5 $ and the regularization parameter $ \lambda = .01 $. Figure~\ref{fig:parallel_netflix} shows that our accelerated method obtains a smaller loss function value and test set MSE, and a nearly identical loss gradient norm, faster than RAGD, DANE, or a typical gradient descent approach. On a large scale, we apply a parallelized version of our accelerated method and ILEA, a communication-efficient parallel algorithm on manifolds proposed in \cite{lizhennips2018}, on the full Netflix dataset. We randomly distribute the data across 64 processors and run the optimization routine for 200 iterations. In Figure~\ref{fig:parallel_netflix}, again we can see the steady acceleration that our method provides in terms of the loss function value and the norm of the loss gradient across iterations, though ILEA obtains a slightly better test set MSE than our method. \begin{figure} \centering \includegraphics[width=.5\linewidth]{figs/netflix_wall_time.png}\includegraphics[width=.5\linewidth]{figs/parallel_netflix_iters.png}\\ \includegraphics[width=.5\linewidth]{figs/netflix_iters.png}\includegraphics[width=.5\linewidth]{figs/parallel_netflix_wall_time.png} \caption{Results for the parallel (right) and reduced (left) Netflix example.}\label{fig:parallel_netflix}
\end{figure}
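The per-user quantities $w_k(U)$ and $\nabla\mathcal{L}(U,k)$ above translate directly into code; the following numpy sketch (ours, on synthetic data of made-up size, not the Netflix pipeline) implements them exactly as displayed.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
M, r, lam = 50, 5, 0.01
U = np.linalg.qr(rng.normal(size=(M, r)))[0]   # an orthonormal basis point

mask = rng.uniform(size=M) < 0.2               # observed entries for one user k
c = np.where(mask, 1.0, lam)                   # the vector c_k
x = np.where(mask, rng.normal(size=M), 0.0)    # the vector X_k (zeros off Omega)

def w_k(U):
    C2 = c ** 2                                # c_k o c_k
    return np.linalg.solve(U.T @ (C2[:, None] * U), U.T @ (C2 * x))

def grad_L(U):
    w = w_k(U)
    return np.outer((c ** 2) * (U @ w - x), w) # diag(c o c)(U w - X_k) w^T

print("per-user gradient shape:", grad_L(U).shape)   # (M, r)
\end{verbatim}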
\section{Introduction} In 1932 Neumann (\cite{N}) investigated subgroups $N^{*}$ of the homogeneous modular group $SL(2, \mathbb{Z})$ which are defined by the condition $(N)$. \begin{itemize} \item[$(N)$] For any ordered pair of relatively prime integers $(a,c)$, $N^{*}$ contains exactly one matrix whose first column consists of the ordered pair $(a,c)$. \end{itemize} Neumann investigated these subgroups in connection with problems in the foundations of geometry. In 1973 Magnus explored subgroups $N$ of the modular group $M$ such that the natural extension $N^{*}$ of $N$ by the central element of $SL(2, \mathbb{Z})$ has the Neumann property $(N)$. He proved that they are maximal nonparabolic subgroups of the modular group $M$ (\cite{M}). In \cite{H3}, \cite{BHp}, \cite{H} the notion of the distant graph over a ring with identity was considered; it is a combinatorial object representing the projective line $\mathbb{P}(R)$ over a ring $R$ \cite{H2}, \cite{H3}. In the case of the integers the vertices, i.e.\ the elements of $\mathbb{P}(Z) \simeq Q\cup\{\infty \}=:\bar Q$, are all cyclic submodules of the $Z$-module $Z^2$ generated by vectors with co-prime coordinates. The edges of this graph connect vertices whose generators are the rows of an invertible $(2 \times 2)$-matrix over $Z$. This distant graph will be denoted by $\Gamma_Z$. The graph $\Gamma_Z$ is depicted in Fig.~1. Note that we can construct this graph by using the Stern-Brocot procedure twice. For the vectors with positive slopes start from $[1,0]$ and $[0,1]$, and for the vectors with negative slopes from $[1,0]$ and $[0,-1]$. To get $\Gamma_Z$ one has to just "glue" the vectors $[0,1]$ and $[0,-1]$. \begin{figure}[ht]{\footnotesize \textbf{ Fig. 1} \hspace{2mm} Distant graph of $\mathbb Z$}\label{dgoi} \centering \includegraphics[width=1\textwidth]{Z_2CNo.png} \end{figure} The extended modular group $\widehat{M}$ acts on $\bar Q$ as the group of all linear fractional transformations of the form $$\alpha (z) =\frac {az+b}{cz+d},$$ where $ a,\,b,\,c,\,d \in Z$, $\;\alpha(\infty)=\frac{a}{c}$, $\;\alpha\left(-\frac{d}{c}\right)=\infty\;$ and $\;ad-bc=\pm 1$. The group $\widehat{M}$ contains as a normal subgroup the modular group $M$ of transformations with determinant 1. We will use the following presentation of the group $\widehat{M}$ (\cite{Mu}): $$ \langle \omega,\tau,\nu\;|\;\; \omega^2 =\nu^2=(\omega\nu)^2=(\omega\tau\nu)^2=(\omega\tau)^3=1\rangle,$$ where $\tau(z)=z+1$, $\omega(z)=-\frac{1}{z}$ and $\nu(z)=-z$.\\ Using these generators we have the following presentation of $M$: $$\langle \omega,\tau\;|\;\; \omega^2=(\omega\tau)^3=1\rangle.$$ In \cite{MS2} it was proved that the distant graph $\Gamma_Z$ is a Cayley graph, and then in \cite{MS3} uncountably many of its Cayley representations were constructed. Implicitly, it was proven there that its Cayley representations in the modular group are Neumann subgroups, although this was not stated explicitly. After submitting our paper we found the works of Magnus (\cite{M}) and of Brenner and Lyndon (\cite{BL1}, \cite{BL2}) and realised that our research overlaps in part with those works. This paper is a continuation of our project to find all Cayley representations of $\Gamma_Z$ in $PGL(2, \mathbb{Z})$ (\cite{MS1}, \cite{MS2}, \cite{MS3}) and it completes the series of works devoted to the description of Cayley groups of $\mathbb{P}({Z})$. Because the automorphism group of $\Gamma_Z$ is $PGL(2, \mathbb{Z})$, this yields all Cayley groups of $\Gamma_Z$.
For this purpose we extend the original definition of a Neumann subgroup (cf. \cite{J}, \cite{T}) to subgroups of $\widehat{ M}.$ Analogously to the "modular" definition from the papers of Magnus (\cite{M}), Tretkoff (\cite{T}) and Brenner-Lyndon (\cite{BL1}) we state the following. \begin{defn}\label{defN} A subgroup $\widehat{S}\subset\widehat{M}$ is called a \emph{Neumann subgroup} of $\widehat M$ if for every $r \in\bar Q$ there exists exactly one $\alpha\in\widehat{S}$ such that $\alpha(\infty)=r$. \end{defn} In this paper we obtain the following description of all Cayley representations of $\Gamma_Z$. If $\widehat{S}$ is any Cayley group of $\Gamma_Z$ then the following conditions are equivalent: \begin{enumerate} \item $\widehat{S}$ is a Neumann subgroup of $\widehat{M}$; \item the set $\{ \tau^n,\;\tau^n \nu\,:\; n \in \mathbb{Z} \}$ forms a complete system of distinct right coset representatives of $\widehat{S}$ in $\widehat{M}$; \item there exists an involution $\iota \colon \mathbb{Z} \to \mathbb{Z}$ satisfying $$\iota(\iota(n) - \delta_n) = \iota(n+1) + \delta_{n+1}$$ such that $\widehat{S} = \{\sigma_n : n \in \mathbb{Z} \}$, where $$\sigma_n(z) = \frac{nz-n\iota(n)-\delta_n}{z-\iota(n)},\;\;\; \delta_n = \det \sigma_n.$$ \end{enumerate} If $\widehat{S} \subset M$, the description of this structure is contained in Theorem~3.1 of the Brenner-Lyndon paper \cite {BL1} (see also \cite{S}). We prove this equivalence in the more general situation of subgroups of $\widehat{M}$, and in contrast to their algebraic proof ours is geometric and uses techniques involving the distant graph of $\mathbb{P}(Z)$. Moreover it is possible to obtain the following presentation of $\widehat{S}$: \begin{align*} \widehat{S} = \left< \sigma_n\;|\;\; \sigma_n \sigma_{\iota(n)}= \sigma_n \sigma_{\iota(n)+ \delta_n} \sigma_{\iota(n-1)} = 1,\; n \in \mathbb{Z} \right> \end{align*} We do not include the proof of this fact because it is long and laborious, but it consists in a typical application of the Reidemeister-Schreier procedure. We will prove in a forthcoming paper that if $\widehat S$ is a Neumann subgroup then $\widehat S$ is a free product of some number of groups of order $2$, groups of order $3$ and infinite cyclic groups, just like Neumann subgroups of $M$. The difference is that if a Neumann subgroup is not contained in $M$, it has to possess free generators of negative determinant. Moreover it is possible to retrieve a set of independent generators from the above presentation. Additionally, if $\widehat{S} \not \subset M$ then $S = \widehat{S} \cap M$ is a normal, nonparabolic subgroup of index 2 in $\widehat{S}$. In the last section we describe in detail the structures of both groups $\widehat S$ and $S$ and the connection between them. Let $\widehat{r_2}$, $\widehat{r_3}$ denote the numbers of independent generators of order $2$ and $3$, respectively. Then let $\widehat r_\infty^\pm$ denote the number of free generators with determinant equal to $\pm 1$, and $\widehat r_\infty=\widehat r_\infty^++\widehat r_\infty^-$.
Then we have the following restrictions: \begin{itemize} \item $\widehat{r_2}+ \widehat{r_3}+ \widehat{r}_{\infty} = \infty$; \item $ \widehat{r}_{\infty}^- \geq 1$, and if $\widehat{r}_{\infty}^+$ is finite then it is even; \item $\widehat{r_2}+ \widehat{r_3}+ \frac{\widehat{r}^+_{\infty}}{2} \geq \widehat{r}_{\infty}^-$. \end{itemize} Moreover the group $S$ is a free product of $2\widehat{r_2}$ subgroups isomorphic to $C_2$, $2\widehat{r_3}$ subgroups isomorphic to $C_3$ and $2\widehat{r}_{\infty}-1$ subgroups isomorphic to $\mathbb{Z}$. The proof of these facts is postponed to the next paper because it requires the coset graph method, which is not presented in this article. Finally, using the construction of an involution of $Z$ from \cite{MS3}, we show a realization of each group with the above parameters. \section{ Cayley representations of the $\mathbb Z$-distant graph} For the purpose of this subsection it is more convenient to use the language of matrices, so we treat the extended modular group as a quotient of $GL(2,Z)$: $\widehat M\simeq PGL(2,Z)=GL(2,Z)/\{\pm I^*\}$, where $I^*$ denotes the identity matrix. The elements of $GL(2,Z)$ will also be denoted by Greek lowercase letters with an asterisk as a superscript, and their projections by the natural homomorphism $\Pi$ onto $PGL(2,Z)$ by Greek lowercase letters (the same as the elements of $\widehat M$). Precisely, a map $z\longmapsto\frac{az+b}{cz+d}$ is understood as $\Pi(\alpha^*)$, where $\alpha^*=\scriptsize{\pm\left(\begin{array}{ll} a &b \\&\\ c & d \end{array}\right)}_.$ We aim to show that Neumann subgroups of $\widehat M$ are precisely the Cayley representations of the distant graph $\Gamma_Z$ of $\mathbb P(Z)\simeq \bar Q$. Since every Cayley representation of a graph can be treated as a subgroup of its automorphism group, we need the following observation. \begin{prop}\label{autdg} The automorphism group of the distant graph on $\mathbb P( Z)$ is isomorphic to $PGL(2, Z)$. \end{prop} \begin{proof} We use the notion of a maximal clique in a graph: a set of pairwise adjacent vertices is called a \emph{clique}. If a clique is maximal with this property then it is called a \emph{maximal clique}. We will also need the notion of a harmonic quadruple in a distant graph. For the definition we refer to \cite{He}, p.787. By Lemma~1 of \cite{MS1} we know that a subset $\{v_1,v_2,v_3,v_4\}\subset V(\Gamma_Z)$ forms a harmonic quadruple iff \begin{enumerate} \item $(v_i,v_k,v_j,v_l,v_i)$ is a cycle of successively adjacent vertices and \item either $(v_i,v_j)\in E(\Gamma_Z)$ or $(v_k,v_l)\in E(\Gamma_Z)$, \end{enumerate} where $\{i,j,k,l\}=\{1,2,3,4\}$. By inspection of Fig.~1 one can check that for every maximal clique $C$ and every $v\in V(\Gamma_Z)$ there exists a finite sequence of harmonic quadruples $(Q_1,\dots,Q_n)$ such that \begin{enumerate} \item $C\subset Q_1$, \item $Q_i\cap Q_{i+1}$ is a maximal clique, \item $v\in Q_n$. \end{enumerate} Now we are in a position to start the proof. Obviously $ PGL(2, Z)$ is a subgroup of $Aut(\Gamma_Z )$, and $ PGL(2, Z)$ acts transitively on the set of ordered maximal cliques of $\Gamma_Z$. Let $\alpha\in Aut(\Gamma_Z)$ and let $C=(v_1,v_2,v_3)$ be an arbitrarily chosen ordered maximal clique in $\Gamma_Z$. There exists $\eta\in PGL(2, Z)$ that sends the image of $(v_1,v_2,v_3)$ under $\alpha$ to its original position, i.e. $((\eta\circ \alpha)v_1,(\eta\circ \alpha)v_2,(\eta\circ \alpha)v_3)=(v_1,v_2,v_3)$. We will show that $ \alpha=\eta^{-1}\in PGL(2, Z)$.
Fix an arbitrary vertex $v\in V(\Gamma_Z)\setminus C$ and let $(Q_1,\dots,Q_n)$ satisfy 1., 2. and 3. Since every automorphism of $\Gamma_Z$ sends a harmonic quadruple to a harmonic one, every automorphism that fixes any three members of a harmonic quadruple necessarily fixes the fourth one. Therefore a simple induction argument yields $(\eta\circ \alpha) v=v$. We have shown that $\eta\circ \alpha=I:=\Pi(I^*)$. \end{proof} \begin{thm}\label{cayleyneumann} A subgroup of $\widehat M$ is a Neumann subgroup iff it is isomorphic to some Cayley representation of the distant graph of $\mathbb P( Z)$. \end{thm} \begin{proof} Assume that $\widehat S \subset \widehat M$ is a Neumann subgroup. Then $\widehat S$, treated as a subgroup of $PGL(2,Z)$, by definition acts on $\mathbb P(Z)$ freely and sharply vertex-transitively, and thus by the Sabidussi theorem $\widehat S$ is a Cayley representation of $\Gamma_Z$. Conversely, for a Cayley representation $\widehat S$ of $\Gamma_Z$ and each $v\in V(\Gamma_Z)$ there exists precisely one $\alpha\in\widehat S$ with $\alpha e=v$, where $e:= \pm\scriptsize{\left[\begin{array}{c}1\\0\\\end{array}\right]}$, hence $\widehat S$ is a Neumann subgroup of $\widehat M$. \end{proof} Every Neumann subgroup of $\widehat M$, hence a Cayley representation of $\Gamma_Z$, defines some involution of $Z$ in the following way.\\ First observe that given a Cayley representation $(\widehat S,\mathcal G,\varphi)$ we may assume that $\varphi(1)=e$. Indeed, let $(\widehat S,\tilde{\mathcal G},\tilde\varphi)$ be a Cayley representation of $\Gamma_Z$. We get the required representation by putting $$\mathcal G=\tilde\varphi^{-1}(e)\,\tilde{\mathcal G}\,(\tilde\varphi^{-1}(e))^{-1},\;\;\;\varphi(\alpha)=\tilde\varphi(\alpha\,\tilde\varphi^{-1}(e)).$$ From the proof of the Sabidussi theorem it follows that if in the Cayley representation of $\Gamma_Z$ the vertex $e$ is labeled by $1$, then $\varphi(\alpha)=\alpha e$, regardless of the choice of $\widehat S$. Therefore from now on we will not indicate $\varphi$ in the Cayley representations.\\ Now the neighbourhood of $e$ consists of the vertices $v_n=\pm\scriptsize{\left[\begin{array}{c}n\\1\\\end{array}\right]}\in\mathbb P( Z)$, $n\in Z$, hence $\mathcal G=\{\sigma_n:=\varphi^{-1}(v_n):\;n\in\mathbb Z\}$. As $\mathcal G=\mathcal G^{-1}$ we have $\sigma_n^{-1}=\sigma_{\iota(n)}$ for some bijection $\iota: Z\longrightarrow Z$. Since $\sigma_{\iota(\iota(n))}=\sigma_{\iota(n)}^{-1}=\sigma_n$, $\iota$ is an involution. The equality $\sigma_k e=v_k$, $k\in\mathbb Z$, applied twice, to $n$ and $\iota(n)$, yields \begin{equation}\label{gener} \sigma_n=\Pi(\sigma_n^*),\; n\in\mathbb Z, \end{equation} where $$\sigma_n^*=\left(\begin{array}{ll} n & -n\iota(n)-\delta_n \\&\\ 1 & -\iota(n) \end{array}\right)_.$$ Note that $\delta_n=det\, \sigma_n=det \,\sigma_{\iota(n)}=\delta_{\iota(n)}$. We also have \begin{equation}\label{S} \sigma_k^{-1}\sigma_l\in \mathcal G \;\;\;\mbox{ iff }\;\;\;|k-l|=1, \end{equation} and a straightforward calculation shows that $$\sigma_n^{-1}\sigma_{n+1}=\sigma_{\iota(n)-\delta_n}.$$ Therefore the lower right term of $\sigma_{\iota(n)-\delta_n}^*$ equals $-(\iota(n+1)+\delta_{n+1})$. It follows that the involution $\iota$ satisfies \begin{equation}\label{iota} \iota(\iota(n)-\delta_n)=\iota(n+1)+\delta_{n+1},\;\;\;\delta_n=\delta_{\iota(n)}, \end{equation} where $(\delta_n)\in\{-1,1\}^{ Z}$.
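A small machine check (ours; it is not part of the argument) may reassure the reader here: for the involution window $\iota(-2)=1$, $\iota(-1)=-1$, $\iota(0)=0$, $\iota(1)=-2$ with all $\delta_n=1$, which satisfies (\ref{iota}) for $n=-2,-1,0$, the matrices $\sigma_n^*$ of (\ref{gener}) indeed obey $\sigma_n^{-1}\sigma_{n+1}=\sigma_{\iota(n)-\delta_n}$ projectively:
\begin{verbatim}
import numpy as np

iota = {-2: 1, -1: -1, 0: 0, 1: -2}   # a window satisfying the condition
delta = {n: 1 for n in iota}

def sigma(n):
    return np.array([[n, -n * iota[n] - delta[n]],
                     [1, -iota[n]]])

for n in (-2, -1, 0):
    lhs = np.rint(np.linalg.inv(sigma(n)) @ sigma(n + 1)).astype(int)
    rhs = sigma(iota[n] - delta[n])
    assert round(np.linalg.det(sigma(n))) == delta[n]
    assert (lhs == rhs).all() or (lhs == -rhs).all()  # equality in PGL(2, Z)
print("relations verified on the sample window")
\end{verbatim}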
In the sequel we will frequently use the equivalent form of (\ref{iota}): \begin{equation}\label{iotaeq} \iota(\iota(n)-\epsilon\delta_n)= \iota(n+\epsilon) +\epsilon\delta_{n+\epsilon},\;\;\;\delta_n=\delta_{\iota(n)}, \end{equation} $\epsilon\in\{-1,+1\}$. We will need the following equality: \begin{equation}\label{sigmaiota} \sigma_n=\tau^n\omega\nu^{\frac{1-\delta_n}{2}}\tau^{-\iota(n)}. \end{equation} Conversely, given an involution of $Z$ satisfying (\ref{iota}), we can define a subgroup of $PGL(2,Z)$ generated by the elements defined by (\ref{gener}). We will say that such subgroups of $\widehat M$ are \emph{generated by an involution}. \\ Note that (\ref{iotaeq}) yields the following relations in the subgroups generated by involutions: \begin{equation}\label{rel1PGl} \sigma_n\sigma_{\iota(n)}=I, \end{equation} \begin{equation}\label{rel2PGl} \sigma_n\sigma_{\iota(n)-\epsilon\delta_n}\sigma_{\iota(n+\epsilon)}= I. \end{equation} It is possible, but a bit laborious, to show that in fact those relations form presentations of such groups. We will need versions of those relations in $GL(2,\mathbb Z)$: \begin{equation}\label{rel1Gl} \sigma_n^*\sigma^*_{\iota(n)}=-\delta_n\cdot I^*, \end{equation} \begin{equation}\label{rel2Gl} \sigma_n^*\sigma^*_{\iota(n)-\epsilon\delta_n}\sigma^*_{\iota(n+\epsilon)}=\epsilon\delta_n\delta_{n+\epsilon}\cdot I^*. \end{equation} \begin{thm}\label{neumanninv} A subgroup of $\widehat M$ is a Neumann subgroup iff it is generated by an involution. \end{thm} \begin{proof} It was already shown that every Neumann subgroup of $\widehat M$ is generated by an involution. To show the converse, assume that $\widehat S$ is generated by an involution $\iota$ and let $\widehat S^*=\Pi^{-1}(\widehat S)$. Obviously $\widehat S^*=\langle\mathcal G^*\rangle$, where $\mathcal G^*=\Pi^{-1}(\mathcal G)=\{\pm \sigma^*_n:\;n\in\mathbb Z\}=(\mathcal G^*)^{-1}$. We consider the (non-directed) Cayley graph $\Gamma_Z^*$ of $(\widehat S^*,\mathcal G^*)$.\\ We will inductively build some subgraph $\tilde\Gamma_Z^*$ of $\Gamma_Z^*$. Start from $I^*$ and consider the two edges $\{I^*,\epsilon \sigma^*_0\}$, $\epsilon=\pm1$. In the sequel the fact that $\{\alpha^*,\beta^*\}\in E(\Gamma_Z^*)$ will be denoted by $\alpha^*\vartriangle\beta^*$. With each of these edges associate a vertex $\epsilon \sigma^*_{\epsilon}$. Obviously $\epsilon \sigma^*_\epsilon\vartriangle I^*$. Observe that by (\ref{S}) we have $\epsilon \sigma^*_\epsilon\vartriangle\epsilon \sigma^*_0$ as well. Moreover, denoting $e^*=\scriptsize{\left[\begin{array}{c}1\\0\\\end{array}\right]}$, we have $$I^*e^*+(\epsilon \sigma^*_0)e^*=(\epsilon \sigma^*_{\epsilon})e^*.$$ \\ Now we describe the general step of the induction. To each edge $\{\alpha^*,\alpha^*(\epsilon \sigma^*_k)\}$ obtained in the previous step associate the vertex $\beta^*=\alpha^*(\epsilon \sigma^*_{k+\epsilon})\vartriangle \alpha^*$. \\ Observe that although the definition of $\beta^*$ is not symmetric, the vertex $\beta^*$ itself is in fact independent of the order of the defining vertices. Indeed, to the ordered edge $(\alpha^*(\epsilon \sigma^*_k),\alpha^*)=(\alpha^*(\epsilon \sigma^*_k),\alpha^*(\epsilon \sigma^*_k)(-\epsilon\delta_k\sigma^*_{\iota(k)}))$ (we use here (\ref{rel1Gl})) we associate the vertex $\alpha^*(\epsilon \sigma^*_k)(-\epsilon\delta_k\sigma^*_{\iota(k)-\epsilon\delta_k})$, which by (\ref{rel2Gl}) is equal to $\beta^*$. Of course $\beta^*\vartriangle \alpha^*(\epsilon \sigma^*_k)$.
Moreover it can be easily checked that \begin{equation}\label{isogr} \alpha^*e^*+\alpha^*(\epsilon \sigma^*_k)e^*=\beta^*e^*. \end{equation} This simply means that the first columns of the vertices of the starting edge sum up to the first column of the associated vertex.\\ Starting from $\epsilon=1$ and then from $\epsilon=-1$ we build two subgraphs of $\Gamma_Z^*$. Each of them forms the Stern-Brocot diagram of slopes with the sign $\epsilon$. They have the vertex $I^*$ in common and together give the graph $\tilde\Gamma_Z^*$. Consider the contraction of $\tilde\Gamma_Z^*$ via $\sigma^*_0\sim_0-\sigma^*_0$, which gives the graph $\tilde\Gamma_Z^*/\sim_0$. From (\ref{isogr}) it follows immediately that the map $\alpha^*\mapsto\alpha^*e^*$ gives a graph isomorphism between $\tilde\Gamma_Z^*/\sim_0$ and $\Gamma_Z$. Now observe that by construction each vertex $\alpha^*$ in $\tilde\Gamma_Z^*/\sim_0$ has a neighborhood consisting of the vertices $\epsilon_n\alpha^*\sigma^*_n$, $n\in\mathbb Z$, $\epsilon_n\in\{-1,1\}$. From this it follows that after contracting $\Gamma_Z^*$ via $\gamma^*\sim-\gamma^*$ we get the graph $\Gamma_Z^*/\sim$ isomorphic to $\tilde\Gamma_Z^*/\sim_0$ and, on the other hand, to the Cayley graph of $(\widehat S,\mathcal G)$. Therefore the Cayley graph of $(\widehat S,\mathcal G)$ and $\Gamma_Z$ are isomorphic, thus $\widehat S$ is a Cayley representation of $\Gamma_Z$, hence, by Theorem~\ref{cayleyneumann}, a Neumann subgroup of $\widehat M$. This completes the proof. \end{proof} We finish this section with a minor remark about another possible definition of a Neumann subgroup of $\widehat M$. Let us denote $\widehat T=\langle \tau,\nu\rangle$ and assume that $\widehat S$ is a Neumann subgroup of $\widehat M$. We have $\widehat S\cap\widehat T=\{I\}$, since $\tau^n(\infty)=\tau^n\nu(\infty)=I(\infty)$. By definition, for each $\alpha \in \widehat M$ there is $\beta\in \widehat S$ with $\alpha(\infty)=\beta(\infty)$. Obviously $\beta^{-1}\alpha\in \widehat T$, thus we get $\widehat S\widehat T=\widehat M$. Now assume that $\widehat S$ satisfies \begin{enumerate} \item $\widehat S\widehat T=\widehat M$ and \item $\widehat S\cap\widehat T=\{I\}$. \end{enumerate} In other words, the above conditions say that the members of $\widehat T$ form a complete system of distinct right coset representatives of $\widehat S$ in $\widehat M$.\\ Now let us fix $r\in \bar Q$ and take an arbitrary $\beta\in\widehat M$ with $\beta(\infty)=r$. From 1. we have $\alpha\in\widehat S$ and $\gamma\in\widehat T$ such that $\beta=\alpha\gamma$, hence $\alpha(\infty)=\beta(\gamma^{-1}(\infty))=\beta(\infty)=r$. Further, if $\alpha$, $\alpha'\in\widehat S$ are such that $\alpha(\infty)=\alpha'(\infty)$, then $\alpha^{-1}\alpha'(\infty)=\infty$, thus $\alpha^{-1}\alpha'\in\widehat S\cap\widehat T$. By 2., $\alpha=\alpha'$. We have shown that the conditions 1. and 2. can be taken as an alternative definition of the notion of a Neumann subgroup of $\widehat M$ (compare the definition of a Neumann subgroup of $M$ from \cite{BL1} or \cite{BL2}). \section{Structure of Neumann subgroups of the extended modular group} In this section we provide two theorems that completely describe the structure of Neumann subgroups of the extended modular group. The proof of the first one can be found in \cite{BL1} or can be derived from \cite{S}. The second theorem is new, but we postpone its proof to the forthcoming paper since it requires methods we do not develop in this paper. The following theorem describes the structure of Neumann subgroups of the modular group.
From the Kurosh Subgroup Theorem, a subgroup $S$ of the modular group is a free product of $ r_2$ subgroups of order $2$, $r_3$ subgroups of order $3$ and $r_\infty$ infinite cyclic subgroups. We express the fact that a group $H$ is such a free product by saying that $H$ has $(r_2,r_3,r_\infty)$-struc\-ture. \begin{thm}[\cite{BL1},\cite{S}] \label{N} If $S$ is a Neumann subgroup of $M$ then $S$ has $(r_2,r_3,$ $r_\infty)$-struc\-ture subject to the conditions that $r_2+r_3+r_\infty=\infty$ and that if $r_\infty$ is finite then it is even. Moreover, every structure satisfying the above conditions is realized by some Neumann subgroup of $M$. \end{thm} If we assume that a Neumann subgroup of $\widehat M$ is not contained in $M$, then the situation becomes essentially more complicated; we describe it in the theorem below. As was announced in the Introduction, we postpone the proof to the forthcoming paper. We only make here some remarks on the number of independent generators in the groups $\widehat S$ and $S=\widehat S\cap M$.\\ It is easy to observe that the set $$\{\sigma_n, \alpha\sigma_n\alpha^{-1}: \; det\,\sigma_n=1\}\cup\{\alpha\sigma_n,\,\sigma_n\alpha^{-1}:\; det\,\sigma_n=-1\},$$ where $\alpha \in\widehat S$ is an arbitrary element of determinant equal to $-1$, generates $S$. Assume now that $\alpha$ is taken to be equal to some of the $\sigma_n$'s and that the set $D$ of generators of $\widehat S$ with negative determinant is finite. Then the set $(\alpha D\cup D\alpha^{-1})\setminus \{I\}$ has odd cardinality. Moreover, all other generators are doubled. We are not able to prove the theorem below by this method, since it does not allow us to prove that there is a subset of $\{\sigma_n\}$ of independent generators (which actually is true). This is just a hint regarding the statements about the cardinality of the sets of different kinds of generators. Recall that $\widehat r_2$, $\widehat r_3$ denote the numbers of independent generators of order $2$ and $3$, respectively, and that $\widehat r^\pm_\infty$ denote the numbers of free generators with determinant equal to $\pm1$, $\widehat r_\infty=\widehat r^+_\infty+\widehat r^-_\infty$. \begin{thm}\label{NnN} Let $\widehat S$ be a Neumann subgroup of $\widehat M$ that is not entirely contained in $M$ and let $S=\widehat S \cap M$. Then $S\lhd\widehat S$, $\widehat S/S\simeq C_2$ and $\widehat S$ is never a semi-direct product of $S$ and a subgroup isomorphic to $C_2$. Moreover \begin{itemize} \item $\widehat S$ has $(\widehat r_2,\widehat r_3,\widehat r_\infty)$-structure subject to the conditions that \begin{enumerate} \item $\widehat r_2+\widehat r_3+\widehat r_\infty=\infty$, \item $\widehat r^-_\infty\geq1$ and if $\widehat r^+_\infty$ is finite then it is even, \item $\widehat r_2+\widehat r_3+\frac{\widehat r^+_\infty}{2}\geq\widehat r^-_\infty$ and all independent generators of finite order are elliptic; \end{enumerate} \item if $S$ has $(r_2, r_3,r_\infty)$-structure then the following equalities hold $$(r_2, r_3,r_\infty)=(2\widehat r_2,2\widehat r_3,2\widehat r_\infty-1).$$ \end{itemize} \end{thm} Analogously to the situation in Theorem~\ref{N}, it is possible to realize any admissible structure by a Neumann subgroup of the extended modular group. Recall that in order to describe some Neumann subgroup it is enough to define an appropriate involution $\iota$ of the set of integers. The recursive method of such a construction is given in \cite{MS3}.
For the sake of completeness we describe this method below.\\ A map $\tilde\iota:\{k,\ldots,k+l\}\longrightarrow\{k,\ldots,k+l\}$, $k\in \mathbb Z$, $l\geq 0$, is called a \emph{building involution} if: \begin{itemize} \item $\tilde\iota$ is an involution of $\{k,\ldots,k+l\}$; \item $\tilde\iota(k)=k+l$; \item $\tilde\iota$ satisfies (\ref{iota}) for each $n=k,\ldots,k+l-1$. \end{itemize} Note that we can freely shift the domain of a building involution along the integers. Given two building involutions one may construct another one. Indeed, let $$\iota_j:\{k_j,\ldots,k_j+l_j\}\longrightarrow\{k_j,\ldots,k_j+l_j\},$$ $j=0,1$, be building involutions with $k_1=k_0+l_0+1$. Define $$\tilde\iota=\iota_0\sqcup\iota_1:\{k_0-1,\ldots,k_1+l_1+1\}\longrightarrow\{k_0-1,\ldots,k_1+l_1+1\}$$ by \begin{itemize} \item $\tilde\iota(k_0-1)=k_1+l_1+1$; \item $\tilde\iota|_{\{k_j,\ldots,k_j+l_j\}}=\iota_j$, $j=0,1$. \end{itemize} Let us choose a sequence of building involutions $\iota_n:\{k_n,\ldots,k_n+l_n\}\longrightarrow\{k_n,\ldots,k_n+l_n\}$, $n\in \mathbb N$, satisfying $k_0=-1$, $k_1=l_0$, $k_{n+1}=k_n+l_n+2$ for $n\geq1$. We define an involution $\iota=\bigsqcup_{n=0}^\infty\iota_n:\mathbb Z\longrightarrow\mathbb Z$ as a "limit" of the construction: \begin{equation}\label{siota} \iota_0,\;\iota_0\sqcup\iota_1,\;\iota_0\sqcup\iota_1\sqcup\iota_2,\, \ldots\, ,\iota_0\sqcup\iota_1\sqcup\iota_2\sqcup\ldots\sqcup\iota_n,\,\ldots\;, \end{equation} i.e. we require that $\iota|_{\{-n-1,\ldots,k_{n}+l_{n}+1\}}=\iota_0\sqcup\iota_1\sqcup\iota_2\sqcup\ldots\sqcup\iota_{n}$ for each $n\geq1$. If we assume that $\delta_{k_n}=1$ for all $n$, then it follows immediately from the construction that the involution $\iota$ so defined satisfies (\ref{iota}). \begin{thm}\label{str} Let $\widehat r_2$, $\widehat r_3$, $\widehat r^-_\infty\in\{0,1,\ldots\}\cup\{\infty\}$, $\widehat r^+_\infty\in\{0,2,\ldots\}\cup\{\infty\}$ satisfy the conditions $1.$, $2.$ and $3.$ of Theorem~\ref{NnN}. Then there is a Neumann subgroup $\widehat S<\widehat M$ such that $\widehat S\setminus M\neq\emptyset$ and $\widehat S$ has $(\widehat r_2,\widehat r_3,\widehat r^-_\infty+\widehat r^+_\infty)$-structure. \end{thm} \begin{proof} First we define six building involutions, with properties to be described in a moment; the terms of the required sequence of building involutions will be taken from among these six. We require that, having chosen a finite sequence of the building involutions, the next one to be chosen delivers generators independent of the generators brought by the previously chosen involutions. In that way we ensure that the constructed group will be a free product. Then we have to choose the six building involutions in such a way that each of them brings appropriate independent generators.
We decide to take the following building involutions (all denoted by the same symbol $\iota$): \begin{enumerate} \item $\iota:\{k\}\longrightarrow\{k\}:\;\;\;\iota(k)=k$, $det\,\sigma_k=1$;\\ -- delivers a generator of order $2$;\vspace{2mm} \item $\iota:\{k,k+1\}\longrightarrow\{k,k+1\}:\;\;\;\iota(k)=k+1$, $det\,\sigma_k=det\,\sigma_{k+1}=1$;\\ -- delivers a generator of order $3$;\vspace{2mm} \item $\iota:\{k,\ldots,k+9\}\longrightarrow\{k,\ldots,k+9\}$: \begin{center} \begin{tabular}{c|c|c|c|c|c} $n$ & $k$& $k+1$ & $k+2$ & $k+3$ & $k+5$ \\ \hline $ \iota(n)$ & $k+9$ & $k+4$ & $k+6$ & $k+7$ & $k+8$\\ \end{tabular} \end{center} $det\,\sigma_{k+j}=1$ for all $j$;\\ -- delivers two free generators, both of determinant $1$;\vspace{2mm} \item $\iota:\{k,\ldots,k+6\}\longrightarrow\{k,\ldots,k+6\}$: \begin{center} \begin{tabular}{c|c|c|c|c } $n$ & $k$ & $k+1$ & $k+2$ & $k+3$ \\ \hline $ \iota(n)$ & $k+6$ & $k+4$ & $k+2$ & $k+5$ \\ \end{tabular} \end{center} $det\,\sigma_{k+j}=-1$ iff $j=1,3,4,5$;\\ -- delivers a free generator of determinant $-1$ and a generator of order $2$;\vspace{2mm} \item $\iota:\{k,\ldots,k+7\}\longrightarrow\{k,\ldots,k+7\}$: \begin{center} \begin{tabular}{c|c|c|c|c } $n$ & $k$ & $k+1$ & $k+2$ & $k+4$ \\ \hline $ \iota(n)$ & $k+7$ & $k+5$ & $k+3$ & $k+6$ \\ \end{tabular} \end{center} $det\,\sigma_{k+j}=-1$ iff $j=1,4,5,6$;\\ -- delivers a free generator of determinant $-1$ and a generator of order $3$;\vspace{2mm} \item $\iota:\{k,\ldots,k+15\}\longrightarrow\{k,\ldots,k+15\}$: \begin{center} \begin{tabular}{c|c|c|c } $n$ & $k$ & $k+1$ & $k+12$ \\ \hline $ \iota(n)$ & $k+15$ & $k+13$ & $k+14$ \\ \end{tabular} \end{center} and $\iota|_{\{k+2,\ldots,k+11\}}$ is defined as in 3., with the domain appropriately shifted. We assign determinants as follows: $det\,\sigma_{k+j}=-1$ iff $j=1,12,13,14$;\\ -- delivers a free generator of determinant $-1$ and two free generators of determinant $1$; \end{enumerate} In every case but 1. we have the relations $$\mbox{R($j$)}:\; \sigma_{k+j}=\sigma_{k+j+1}\sigma_{\iota(k+j+1)+\delta_{k+j+1}},\;\;\;j=0,\ldots,l$$ and $$I(j):\;\sigma_{k+j}\sigma_{\iota(k+j)}=1,\;\;\;j=0,\ldots,l+1,$$ with appropriately taken $l$. First we show that in each case the building involution delivers some number of independent generators.\\ \textbf{Case 1.} We have just one generator of order $2$.\\ \textbf{Case 2.} We have $l=0$ and we can drop $I(0)\equiv I(1)$ and the generator $\sigma_{k+1}$. After substituting $\sigma_k^{-1}$ for $\sigma_{k+1}$ in $R(0)$ we are left with just one generator $\sigma_k$, of order $3$.\\ \textbf{Case 3.} We have $l=8$. Obviously we can drop all the relations $I(j)$, and then we can drop the generators $\sigma_{\iota(k+j)}=\sigma_{k+j}^{-1}$ for $j=0,1,2,3,5$. After substituting the remaining generators into the relations $R(j)$ we see that $R(0)\equiv R(4) \equiv R(8)$, $R(1)\equiv R(3) \equiv R(6)$ and $R(2)\equiv R(5) \equiv R(7)$, thus we are left with the relations $R(j)$, $j=0,1,2$. Now we can drop the relation $R(1)$ and the generator $\sigma_{k+3}=\sigma_{k+1}^{-1}\sigma_{k+2}$, then the relation $R(2)$ and the generator $\sigma_{k+5}=\sigma_{k+2}^{-1}\sigma_{k+1}^{-1}\sigma_{k+2}$, and finally the relation $R(0)$ and the generator $\sigma_k=\sigma_{k+1}\sigma_{k+2}^{-1}\sigma_{k+1}^{-1}\sigma_{k+2}$. We are left with two hyperbolic generators $\sigma_{k+1}$, $\sigma_{k+2}$ and no relations. The remaining cases describe building involutions which deliver a generator of infinite order with negative determinant.
As was already announced, every such building involution has to simultaneously deliver at least one generator of order $2$, or at least one generator of order $3$, or at least two hyperbolic generators.\\ \textbf{Case 4.} We have $l=5$. Drop the relations $I(j)$, $j=4,5,6$, and next the generators $\sigma_{\iota(k+j)}=\sigma_{k+j}^{-1}$ and the relations $I(j)$ for $j=0,1,3$. After substituting the remaining generators into the relations $R(j)$ we see that $R(0)\equiv R(3)\equiv R(5)$ and $R(1)\equiv R(2)\equiv R(4)$, thus we are left with the relations $R(0)$ and $R(1)$. Now we can drop the relation $R(1)$ and the generator $\sigma_{k+3}=\sigma_{k+2}\sigma_{k+1}$, then the relation $R(0)$ and the generator $\sigma_k=\sigma_{k+1}\sigma_{k+2}\sigma_{k+1}$. We are left with the hyperbolic generator $\sigma_{k+1}$ and the generator $\sigma_{k+2}$ of order $2$, and no relation between them. \\ \textbf{Case 5.} We have $l=6$. Drop all the relations $I(j)$ and next the generators $\sigma_{\iota(k+j)}=\sigma_{k+j}^{-1}$ for $j=0,1,2,4$. After substituting the remaining generators into the relations $R(j)$ we see that $R(0)\equiv R(4)\equiv R(6)$ and $R(1)\equiv R(3)\equiv R(5)$, thus we are left with the relations $R(0)$, $R(1)$ and $R(2)$. Now we can drop the relation $R(1)$ and the generator $\sigma_{k+4}=\sigma_{k+2}^{-1}\sigma_{k+1}$, then the relation $R(0)$ and the generator $\sigma_k=\sigma_{k+1}\sigma_{k+2}^{-1}\sigma_{k+1}$. We are left with the hyperbolic generator $\sigma_{k+1}$ and the generator $\sigma_{k+2}$ of order $3$, and no relation between them.\\ \textbf{Case 6.} We have $l=14$. Considering the generators $\sigma_{k+j}$ for $j=2,\ldots,11$, we have to consider as well the relations $R(j)$ for $j=2,3,\ldots,10$ and the relations $I(j)$ for $j=2,3,\ldots,11$. Then we may use Case~3 with the domain shifted by $2$. From this we get two hyperbolic generators $\sigma_{k+3}$ and $\sigma_{k+4}$ with determinants equal to $1$ and no relations. It is left to consider the generators $\sigma_{k+j}$ for $j=0,1,11,12,13,14,15$, the relations $I(j)$ for $j=0,1,12,13,14,15$ and the relations $R(j)$ for $j=0,1,11,12,13,14$. We have to consider $\sigma_{k+11}$ since the relation $R(11)$ has not been used so far. Drop all the remaining relations $I(j)$ and next the generators $\sigma_{\iota(k+j)}=\sigma_{k+j}^{-1}$ for $j=0,1,12$. After substituting the remaining generators into the relations $R(j)$ we see that $R(0)\equiv R(12)\equiv R(14)$ and $R(1)\equiv R(11)\equiv R(13)$, thus we are left with the relations $R(0)$ and $R(1)$. Now we can drop the generator $\sigma_{k+12}=\sigma_{k+2}^{-1}\sigma_{k+1}$, as it is generated by $\sigma_{k+1}$ and the two hyperbolic generators $\sigma_{k+3}$ and $\sigma_{k+4}$ already considered, together with the relation $R(1)$. Then we may drop the generator $\sigma_k=\sigma_{k+1}\sigma_{k+12}$ and the relation $R(0)$. Finally we are left with three hyperbolic generators: two of determinant $1$, namely $\sigma_{k+3}$ and $\sigma_{k+4}$, and the generator $\sigma_{k+1}$ of determinant $-1$, and no relations. We have finished the first step of the induction. For each of the six chosen building involutions we have $det \sigma_k=1$. Now assume that we have taken the building involutions $\iota_i$, $i=0,\ldots, n+1$, from our list and that the involution $$\iota_0\sqcup\iota_1\sqcup\iota_2\sqcup\ldots\sqcup\iota_{n}$$ brings some number of independent generators. We have two new generators $\sigma_{-n-2}$ and $\sigma_{k_{n+1}+l_{n+1}+1}$.
Recall that according to our construction both of them have determinant equal to $1$. We have to check whether the involution $\iota_0\sqcup\iota_1\sqcup\iota_2\sqcup\ldots\sqcup\iota_{n+1}$ has any independent generators other than those delivered by $\iota_0\sqcup\iota_1\sqcup\iota_2\sqcup\ldots\sqcup\iota_{n}$ and $\iota_{n+1}$. The following new relations appear: $$\begin{array}{ll} R'(1): & \sigma_{-n-2}=\sigma_{-n-1}\sigma_{k_n+l_n+2},\\&\\ R'(2):& \sigma_{k_n+l_n+1}=\sigma_{k_n+l_n+2}\sigma_{k_{n+1}+l_{n+1}+1},\\&\\ R'(3):& \sigma_{k_{n+1}+l_{n+1}}=\sigma_{k_{n+1}+l_{n+1}+1}\sigma_{-n-1} \end{array}$$ and $$I':\;\;\sigma_{-n-2}\sigma_{k_{n+1}+l_{n+1}+1}=1.$$ We can drop the generator $\sigma_{k_{n+1}+l_{n+1}+1}=\sigma_{-n-2}^{-1}$ and the relation $I'$. If we substitute the above into the relations $R'(2)$ and $R'(3)$ and use the relations $\sigma_{k_n+l_n+1}\sigma_{-n-1}=1$ and $\sigma_{k_{n+1}+l_{n+1}}\sigma_{k_n+l_n+2}=1$, then we get $R'(1)\equiv R'(2)\equiv R'(3)$. Now we can drop the generator $\sigma_{-n-2}$ and the relation $R'(1)$. \end{proof}
\section{Introduction} \label{intro} National economies consist of interlinked regional economies that react differently to changing macroeconomic forces, government policies, imported materials prices, and technological innovation. Thus, national business cycles (BCs) are an admixture of regional cycles fluctuating diversely. Earlier studies of regional BCs surveyed by \cite{domazlicky_1980} examined how and why cycles differ. By contrast, the advent of the Economic and Monetary Union (EMU) in Europe has renewed research interest in similarities and synchronization among EU states' BCs because synchronization facilitates common fiscal and monetary policies within the EMU. However, empirical studies often reach divergent conclusions \citep{massmann_2004, grayer_2007, dehaan_2008, montoya_2008}, likely because they use different raw data or methods for estimating cycles and gauging synchronization. For instance, \cite{artis_1999} found that synchronization intensified during the European Exchange Rate Mechanism period (1973--1995), but \cite{massmann_2004} determined periods of synchronization and desynchronization using identical but updated data. Meanwhile, after conducting a comprehensive study using six estimation methods and three measures of synchronization, \cite{kappler_2013} found little support for BC synchronization and observed that degrees of synchronization fluctuated over time. Euro-area studies of synchronization have aroused interest in regional cycles within a country, such as the United States (U.S.) and Japan. For example, \cite{clark_2001} found that BCs of nine U.S. Census regions are significantly more synchronized than those of EU countries. Moreover, \cite{artis_2011} found that the degree of regional BC synchronization within Japan is strikingly higher than that in the U.S. and the Euro area. These findings imply that national borders may dampen synchronization between regional BCs. Numerous studies regarding the synchronization of regional BCs, whether they cover inter- or \textit{intra}-national BCs, found significant regional differences in the timing of BCs' transitions and in their duration. Among them, several studies announced noteworthy results. For instance, \cite{grayer_2007} determined a recurring pattern of declining synchronization during expansions in Europe. \cite{hamilton_2012} and \cite{chung_2016} noted that co-movement across states characterizes BC contractions in the U.S. Meanwhile, \cite{wall_2007} concluded that contractions tend to be experienced across most Japanese prefectures. Taken together, these results, garnered via different datasets and methods, suggest that the degree of synchronization between regional BCs intensifies during contractions and diminishes during expansions. We examine this possibility by applying a sophisticated method for identifying the time-varying degree of synchronization to regional BC data in the U.S. and Japan. As noted above, national borders may dampen synchronization between regional BCs, and therefore this study concentrates on analyzing the synchronization of regional BCs within a single country as a first step. The method is prominent in nonlinear sciences (e.g., \citealp[ch.~6]{pikovsky_2001}) but has been infrequently applied in BC studies.\footnote{ Another way to identify synchronization in time series is through the wavelet transform. Examples of previous studies using the cross-spectrum of wavelet coefficients to gauge synchronization of BCs include \cite{Aguiar_2011} for EU countries and \cite{Aguiar_2017} for U.S.
states. } Before describing the method, we note how we extract BCs from raw data. Following recent studies, we acknowledge that BCs are relative to a trend and focus on deviations from it. For monthly observations of the Composite Index (CI) of coincident indicators in the U.S. and the Index of Industrial Production (IIP) in Japan, we employ a band-pass filter to extract time series indicating regional BCs in both countries. The Hodrick--Prescott filter (\citealp{hodrick_1997}), a high-pass filter often used in economics, removes only trends with low frequencies. The Baxter--King (BK) (\citealp{baxter_1999}) and Christiano--Fitzgerald (CF) (\citealp{christiano_2003}) band-pass filters are also common in the literature, but we employ the Fourier band-pass filter, which is mathematically and computationally simpler.\footnote { \cite{ikeda_2013} also use this filter. } This study's method comprises the following three procedures. First, we convert time series fluctuations into two-dimensional oscillations using the Hilbert transform.\footnote{ The Hilbert transform is discussed in detail in Section~\ref{Hilbert}. It has been used occasionally in the economics literature (see \citealp{ikeda_2013}). } This enables us to identify ``phases'' of circular oscillations, a phase being defined as the position of a cyclically oscillating variable within one period. The converted oscillations contain more information than the original one-dimensional time series and allow a better assessment of synchronization. Second, we take the ``phase difference'' between two cycles to indicate their synchronization. Third, we use the phase difference to calculate a synchronization index that measures the constancy of the phase difference. If the phase difference of two cycles is nearly constant over time, this index takes a value near 1, and we designate this situation as (phase) synchronization. In this sense, synchronization does not depend on the level of the phase difference but on its constancy. Our use of this method supports the hypothesis that synchronization between regional BCs intensifies during contractions and diminishes during expansions both in the U.S. and Japan. An overview of other synchronization measures distinguishes our method from them. The most popular measure of synchronization, namely, the Pearson correlation coefficient, provides in one number the degree of similarity between series over a sampled period. It measures static relations between the series, whereas synchronization is a dynamic phenomenon whose degree varies over time. Moving-window correlations and newer time-varying indexes overcome that deficiency. However, as noted by \citet{european_2006}, correlation with a moving window is sensitive to the window's length. \cite{mink_2012} proposed a multivariate, time-varying measure of synchronization based on an output gap. It gauges the percentage of regions over time whose output gap has the same sign as that of the reference region. However, this synchronization measure is nondifferentiable because of its absolute values, and graphs of calculated series exhibit numerous nonessential spikes. By contrast, our method focuses on phase differences between two time series. Calculated using phase differences, the synchronization index of \cite{rosenblum_2001} captures the time-dependent degree of synchronization even if the phase differences between two time series are large. The correlation coefficient fails to measure the degree of synchronization in such a case because its absolute value can be small.
The study proceeds as follows. Sections \ref{data} and \ref{measuring} describe this study's data and methods, respectively. Section \ref{analysis} presents empirical results. Section \ref{conclusion} concludes the paper. \section{Data} \label{data} We employ two datasets frequently used to investigate regional BCs. One is monthly, seasonally adjusted CI data (2007 average $=100$) in the U.S., spanning from April 1979 to April 2021 (505 months) for 50 states compiled by the Federal Reserve Bank of Philadelphia. The other is monthly raw (i.e., not seasonally adjusted) IIP data (2010 average $=100$) in Japan, spanning from January 1978 to August 2018 (488 months) for 47 prefectures compiled by the Ministry of Economy, Trade, and Industry.\footnote { The CI data for 50 states in the U.S. are from the website of the Federal Reserve Bank of Philadelphia. The IIP data for Japan's 47 prefectures are from NIKKEI NEEDS.} Whether the data are seasonally adjusted does not matter because our band-pass filter settings remove the high-frequency components corresponding to seasonal variation. Figure~\ref{fig:time-series-iip} graphs the time series of CI and IIP data for sampled regions. In particular, Figure~\ref{fig:time-series-iip}~(u1) compares the time series of CI in New York with those in Pennsylvania, New Jersey, and Illinois, the data for which exhibit the greatest synchronization with New York from the viewpoint of our analysis. By contrast, Figure~\ref{fig:time-series-iip}~(u2) compares time series in New York with those in Louisiana, Hawaii, and Utah, the data for which exhibit the least synchronization with New York. Meanwhile, Figure~\ref{fig:time-series-iip}~(j1) compares the time series of IIP in Tokyo with those in Yamagata, Nara, and Akita Prefectures, which exhibit the greatest synchronization with Tokyo. Figure~\ref{fig:time-series-iip}~(j2) compares the time series in Tokyo with those in Okinawa, Miyagi, and Nagasaki Prefectures, showing the least synchronization with Tokyo. \begin{figure}[p] \begin{center} \subfloat{({\bf u1}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{time-series-ci-a1.eps}} \subfloat{({\bf j1}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{time-series-iip-j1.eps}}\\ \vspace{2mm} \subfloat{({\bf u2}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{time-series-ci-a2.eps}} \subfloat{({\bf j2}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{time-series-iip-j2.eps}}\\ \end{center} \caption{Time Series Comparison of the Composite Index and the Index of Industrial Production Data for Sampled Regions.\newline} \begin{spacing}{1.1} Note: Comparison of the composite index data between New York and three other states whose data exhibit the greatest synchronization with New York (i.e., Pennsylvania, New Jersey, and Illinois) (u1), and the least synchronization with New York (i.e., Louisiana, Hawaii, and Utah) (u2). Comparison of the index of industrial production data between Tokyo and three other prefectures whose data exhibit the greatest synchronization with Tokyo (i.e., Yamagata, Nara, and Akita) (j1), and the least synchronization with Tokyo (i.e., Okinawa, Miyagi, and Nagasaki) (j2). \end{spacing} \label{fig:time-series-iip} \end{figure} It is preferable to employ regional CI data for both countries. However, we use IIP for Japan because CI data are unavailable for all prefectures.
Figure~\ref{fig:real-filter-IIP-CI} supports our use of IIP data for Japan. It includes monthly time series of IIP (2015 average $=100$: spanning from January 1978 to August 2018) and CI (2015 average $=100$: spanning from January 1985 to August 2018, which are compiled by the Economic and Social Research Institute (ESRI) of the Cabinet Office) for Japan overall. The shadowed area corresponds to business contractions, whereas the white area corresponds to business expansions. Figure~\ref{fig:real-filter-IIP-CI} demonstrates that the timing of peaks and troughs in the IIP data matches that of the CI data even though their deviations from 1991 to 2012 can be large. In short, IIP data still capture the Japanese BCs adequately. \begin{figure}[p] \begin{center} \includegraphics{time-series-iip-ci.eps}\\ \end{center} \caption{Comparison of Time Series of the Index of Industrial Production and the Composite Index Data for Japan Overall.\newline} \begin{spacing}{1.1} Note: The timing of peaks and troughs in the index of industrial production data matches that of the composite index even though deviations can be large. \end{spacing} \label{fig:real-filter-IIP-CI} \end{figure} \section{Measuring Synchronization} \label{measuring} We apply three procedures to measure synchronization between two scalar (i.e., one-dimensional) time series. First, we convert fluctuations in each scalar series into two-dimensional oscillations using the Hilbert transform to identify a ``phase'' at each time. Second, we take ``phase differences'' between two cycles as an indicator of their synchronization. Third, using the phase differences, we calculate a synchronization index proposed by \cite{rosenblum_2001}. \subsection{Phase Synchronization} \label{synchro} \textit{Synchronization} is a phenomenon in which multiple oscillations adjust their individual rhythms through mutual interactions to maintain a constant phase difference for a time. This phenomenon is strictly called \textit{phase synchronization} and is also called \textit{phase-locking} or \textit{frequency entrainment}. Note that phase synchronization does not depend on the amplitudes of oscillations. If the amplitudes are also identical, it is called \textit{complete synchronization} (\citealp[p.~23]{pikovsky_2001}). In the present paper, when we use the term synchronization, we mean phase synchronization. Synchronization is exemplified with the aid of the simple oscillators $s^1_t={\sin}(2{\pi}t)$ and $s^2_t=2{\sin}(2{\pi}t - {\pi}/{2})$ in Figure~\ref{fig:time-series-sin-sin2}. The phases of $s^1_t$ and $s^2_t$ are $2{\pi}t$ and $2{\pi}t-{\pi}/{2}$, respectively,\footnote { To be exact, the phase value is restricted to $[-\pi,\pi)$ by taking~$\bmod$ \hspace{-0.21cm} $2\pi$. ``Phase'' is defined in Section~\ref{Hilbert}. } rendering their phase difference ${\pi}/{2}$. The time series $s^1_t$ and $s^2_t$ are synchronized because their phase difference is constant over time. \begin{figure}[p] \begin{center} \includegraphics{time-series-sin-sin2.eps}\\ \end{center} \caption{Two Synchronized Time Series with a Temporally Constant Phase Difference.\newline} \begin{spacing}{1.1} Note: ``$\bullet$'' (blue) represents time series $s^1_t={\sin}(2{\pi}t)$ and ``$\times$'' (orange) represents time series $s^2_t=2{\sin}(2{\pi}t-{\pi}/{2})$. The phase difference is ${\pi}/{2}$ for all $t$. \end{spacing} \label{fig:time-series-sin-sin2} \end{figure} \subsection{Hilbert Transform and the Instantaneous Phase} \label{Hilbert} Phases are crucial in synchronization analysis.
Because both the amplitudes and phases of BCs vary over time, however, it is impossible to extract them directly from one-dimensional scalar time series data alone. Therefore, we construct a complex-valued time series $\hat{s}_t$ whose real part is the actual data $s_t$ and whose imaginary part $s^H_t$ is generated from $s_t$ via the Hilbert transform: \begin{equation} \hat{s}_t = s_t + i s^H_t. \label{eq:complex} \end{equation} The Hilbert transform of $s_t$ is given by \begin{equation*} s_t^H=\frac{1}{\pi}P.V.\int_{-\infty}^{\infty}\frac{s_\tau}{t-\tau}d\tau, \end{equation*} where $P.V.$ denotes the Cauchy principal value of the integral. Intuitively, the Hilbert transform provides a phase shift of $-{\pi}/{2}$ radians for every Fourier component of a function. For example, the Hilbert transform of $s_t=\cos(2\pi t)$ is $s^H_t=\cos(2\pi t - {\pi}/{2})=\sin(2\pi t)$. Now, we can define an instantaneous phase at time $t$ using the point $\text{P}_{t}(s_t,s^H_t)$ on the complex plane as the angle $\phi_t$ formed between $\text{OP}_t$ and the horizontal axis in Figure~\ref{fig:hilbert}: \begin{equation*} \phi_t=\left\{\begin{array}{ll} \tan^{-1}\left(\displaystyle \frac{s_t^H}{s_t}\right) & (s_t>0),\\ \tan^{-1}\left(\displaystyle \frac{s_t^H}{s_t}\right)+\pi & (s_t<0). \end{array} \right. \end{equation*} The phase value ranges from $-\pi$ to $\pi$; hence, it can be discontinuous over time. Using the phase $\phi_t$, we can rewrite Equation~\eqref{eq:complex} as \begin{eqnarray*} \hat{s}_t &=& s_t + i s^H_t \nonumber \\ &=& A_t \cos \phi_t + i A_t \sin \phi_t, \end{eqnarray*} where the time-varying amplitude is $A_t = \sqrt{{s_t}^2 + (s^H_t)^2}$. The earlier discussion assumes that $t$ is continuous; however, we apply the procedure to the discrete time series data of CI and IIP in Section \ref{analysis}. \begin{figure}[p] \begin{center} \includegraphics[scale=.7]{hilbert.eps} \end{center} \caption{Constructing a Complex-Valued Time Series $\hat{s}_t$ from One-Dimensional Real Data $s_t$.\newline} \begin{spacing}{1.1} Note: We construct $\hat{s}_t$ whose real part is the actual data $s_t$ and whose imaginary part $s^H_t$ is generated from $s_t$ via the Hilbert transform. \end{spacing} \label{fig:hilbert} \end{figure} \subsection{Synchronization Index} \label{index} To measure degrees of synchronization between two series for the discrete time interval $1 \le i \le W$, we use the synchronization index $\gamma^2 \in [0,1]$ proposed by \cite{rosenblum_2001}: \begin{equation} \gamma^2 = \left( \frac{1}{W} \sum_{i=1}^W \cos\psi_i \right)^2 + \left( \frac{1}{W} \sum_{i=1}^W \sin\psi_i \right)^2, \label{eq:synchronization1} \end{equation} where $\psi_i$ denotes the \textit{phase difference}, defined as the difference between the phases of the two time series, and $W$ denotes the length of the moving window. This index $\gamma^2$, also known as the phase-locking value, was first used in the economics literature by \cite{bruzda_2015}. When $\psi_i$ is nearly constant over time, the value of $\gamma^2$ is close to 1. This situation is defined as (phase) synchronization. When $\psi_i$ is chosen randomly from the uniform distribution on $[-\pi,\pi)$, $\gamma^2$ approaches 0 as $W$ increases. To examine the time evolution of $\gamma^2$, we presume $W$ is an odd number of discrete time points, and $\gamma^2_t$ defined below represents the strength of synchronization at time $t$, which corresponds to the temporal center point of the moving window of length $W$.
Thus, instead of Equation~\eqref{eq:synchronization1}, we calculate for each time $t$ \begin{equation} \gamma^2_t = \left( \frac{1}{W} \sum_{i=t-p}^{t+p} \cos\psi_i \right)^2 + \left( \frac{1}{W} \sum_{i=t-p}^{t+p} \sin\psi_i \right)^2, \label{eq:synchronization2} \end{equation} where $p={(W-1)}/{2}$ and $0<p<t$. Throughout the analysis, we set $W=13$ for the U.S. and $W=17$ for Japan because the window's length $W=13$ ($W=17$) equals approximately half of the shortest duration of 28 (36) months of the past BCs of the U.S. (Japan) (see Tables~\ref{table:business-cycle-a} and \ref{table:business-cycle-j}). A small change in the length of $W$ hardly affects the analysis results (details in Appendix A). In Equation~\eqref{eq:synchronization2}, the expected value of the synchronization index $\gamma^2_t$ with window length $W=13$ ($W=17$) is 0.077 (0.059) when $\psi_i$ is chosen randomly from the uniform distribution on $[-\pi,\pi)$. \section{Analysis} \label{analysis} Before applying the method described earlier, we extract recurring patterns from the original time series by employing the band-pass filter based on Fourier series representation (details in Appendix B). Although other studies have often used BK and CF band-pass filters, we use the mathematically and computationally simpler Fourier filter. We reject the BK filter because it uses a moving average and entails excluding a considerable number of data points at both ends to make it perform well. The time series filtered by Fourier and CF are nearly identical, especially in the timing of peaks and troughs, and qualitative results of synchronization analysis using these two series are identical (details in Appendix C). When applying the Fourier band-pass filter to the original time series data, we must identify the frequency bands corresponding to the time scale of BCs under consideration. Tables~\ref{table:business-cycle-a} and~\ref{table:business-cycle-j} list the reference dates for cycles in the U.S. announced by the National Bureau of Economic Research (NBER) and those in Japan by ESRI, from which we identify a band spanning 28--130 months for the U.S. and 36--86 months for Japan, respectively. We extract time series for the U.S. with the frequency band of that range using the lower cutoff frequency of $k_l=4$ and the upper cutoff frequency of $k_u=18$, which correspond to 126 ($\approx 505/k_l$) and 28 ($\approx 505/k_u$) months, respectively. For the notations $k_l$ and $k_u$, see Appendix B. We also extract time series for Japan with the lower cutoff frequency of $k_l=6$ and the upper cutoff frequency of $k_u=14$, corresponding to 81 ($\approx 488/k_l$) and 35 ($\approx 488/k_u$) months, respectively. In Appendix D, we perform a robustness check of our analysis against frequency band selection. Both ends of band-pass-filtered data contain artificial information. Thus, for the robustness of our results, we eliminate data points at both ends corresponding to the period of the band's highest frequency. \begin{table}[p] \centering \caption{Reference Dates for the U.S.
Business Cycles Announced by the National Bureau of Economic Research.} \vspace{5mm} \scalebox{1.0}{ \begin{tabular}{cccc} \hline\noalign{\smallskip} Trough & \hspace{1.5mm}Peak & Trough & \hspace{0.5mm}Duration\\ & & & (months)\\ \noalign{\smallskip}\hline\noalign{\smallskip} 1975:03 & 1980:01 & 1980:07 & 64 \\ 1980:07 & 1981:07 & 1982:11 & 28 \\ 1982:11 & 1990:07 & 1991:03 & 100 \\ 1991:03 & 2001:03 & 2001:11 & 128 \\ 2001:11 & 2007:12 & 2009:06 & 91 \\ 2009:06 & 2020:02 & 2020:04 & 130 \\ \noalign{\smallskip}\hline \end{tabular} }\\ \vspace{10mm} {Note: Duration of cycles spans 28--130 months.} \label{table:business-cycle-a} \end{table} \begin{table}[p] \centering \caption{Reference Dates for Japanese Business Cycles Announced by the Economic and Social Research Institute of the Cabinet Office.} \vspace{5mm} \scalebox{1.0}{ \begin{tabular}{cccc} \hline\noalign{\smallskip} Trough & \hspace{1.5mm}Peak & Trough & \hspace{0.5mm}Duration\\ & & & (months)\\ \noalign{\smallskip}\hline\noalign{\smallskip} 1977:10 & 1980:02 & 1983:02 & 64 \\ 1983:02 & 1985:06 & 1986:11 & 45 \\ 1986:11 & 1991:02 & 1993:10 & 83 \\ 1993:10 & 1997:05 & 1999:01 & 63 \\ 1999:01 & 2000:11 & 2002:01 & 36 \\ 2002:01 & 2008:02 & 2009:03 & 86 \\ 2009:03 & 2012:03 & 2012:11 & 44 \\ 2012:11 & 2018:10 & 2020:05 & 90 \\ \noalign{\smallskip}\hline \end{tabular} }\\ \vspace{10mm} {Note: Duration of cycles spans 36--86 months.} \label{table:business-cycle-j} \end{table} Figure~\ref{fig:band-pass-iip} illustrates the band-pass-filtered time series for sampled regions. In particular, Figure~\ref{fig:band-pass-iip}~(u1) compares the time series for New York and the same three states as in Figure~\ref{fig:time-series-iip}~(u1). Meanwhile, Figure~\ref{fig:band-pass-iip}~(j1) compares the time series for Tokyo and the same three prefectures as in Figure~\ref{fig:time-series-iip}~(j1). The timing of peaks and troughs almost coincides because those regions in (u1) and (j1) are most synchronized. By contrast, because those regions in (u2) and (j2) are least synchronized with each other, the timing of peaks and troughs is considerably disordered. These four panels imply that there are periods during which the degree of synchronization between regions in the U.S. and Japan is high and others during which it is low. \begin{figure}[p] \begin{center} \subfloat{({\bf u1}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{time-series-band-pass-ci-a1.eps}} \subfloat{({\bf j1}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{time-series-band-pass-iip-j1.eps}}\\ \vspace{2mm} \subfloat{({\bf u2}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{time-series-band-pass-ci-a2.eps}} \subfloat{({\bf j2}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{time-series-band-pass-iip-j2.eps}} \end{center} \caption{Comparisons of Band-Pass-Filtered Time Series of the Composite Index and the Index of Industrial Production Data for Sampled Regions.\newline} \begin{spacing}{1.1} Note: Comparison of the band-pass-filtered composite index data between New York and three other states whose original data exhibit the greatest synchronization with New York (i.e., Pennsylvania, New Jersey, and Illinois) (u1) and the least synchronization with New York (i.e., Louisiana, Hawaii, and Utah) (u2). The upper (lower) cutoff frequency of the band-pass filter is 28 (126) months for (u1) and (u2).
Comparison of the band-pass-filtered index of industrial production data between Tokyo and three other prefectures whose original data exhibit the greatest synchronization with Tokyo (i.e., Yamagata, Nara, and Akita) (j1) and the least synchronization with Tokyo (i.e., Okinawa, Miyagi, and Nagasaki) (j2). The upper (lower) cutoff frequency of the band-pass filter is 35 (81) months for (j1) and (j2). \end{spacing} \label{fig:band-pass-iip} \end{figure} We next convert fluctuations in each band-pass-filtered scalar time series into two-dimensional oscillations using the Hilbert transform. Figures~\ref{fig:complex}~(u1) and \ref{fig:complex}~(j1) depict two-dimensional trajectories of the instantaneous phase $\text{P}_t(s_t,s^H_t)$ on the complex plane. The horizontal axis represents the variable $s_t$, that is, the band-pass-filtered CI (IIP) data for New York (Tokyo) with the aforementioned upper and lower cutoff frequencies, and the vertical axis represents $s^H_t$, that is, the Hilbert-transformed time series of $s_t$. Trajectories of $\text{P}_t(s_t,s^H_t)$ oscillate around the origin with certain frequencies and amplitudes. This finding implies that BC fluctuations are adequately extracted using the aforementioned lower and upper cutoff frequencies. Comparing Figure~\ref{fig:complex}~(u1) with (u2) or Figure~\ref{fig:complex}~(j1) with (j2) uncovers the significance of the band-pass filter and of the frequency band selection. Figures~\ref{fig:complex}~(u2) and \ref{fig:complex}~(j2) show trajectories of $\text{P}_t(s_t,s^H_t)$ with $s_t$ de-trended but not band-pass-filtered. Trajectories in Figure~\ref{fig:complex}~(u2) slowly rotate and those in Figure~\ref{fig:complex}~(j2) consist of numerous irregular oscillations. These imply that the time series $s_t$ contains lower (higher)-frequency fluctuations than in Figures~\ref{fig:complex}~(u1) and \ref{fig:complex}~(j1) and that BC fluctuations are not adequately extracted. Moreover, in Figure~\ref{fig:complex}~(j2), trajectories sometimes pass close to the origin, suggesting that phase movements exhibit abrupt jumps that may undermine our synchronization analysis. \begin{figure}[p] \begin{center} \subfloat{({\bf u1}) }{\includegraphics{complex-plane-ci-NY1.eps}} \subfloat{({\bf j1}) }{\includegraphics{complex-plane-iip-Tokyo1.eps}}\\ \vspace{2mm} \subfloat{({\bf u2}) }{\includegraphics{complex-plane-ci-NY2.eps}} \subfloat{({\bf j2}) }{\includegraphics{complex-plane-iip-Tokyo2.eps}} \end{center} \caption{Trajectories of the Instantaneous Phase $\text{P}_t(s_t,s^H_t)$ on the Complex Plane.\newline} \begin{spacing}{1.1} Note: In Figure~\ref{fig:complex}~(u1) (Figure~\ref{fig:complex}~(j1)), the horizontal axis represents the variable $s_t$, that is, the band-pass-filtered composite index (CI) (index of industrial production; IIP) data for New York (Tokyo) with the aforementioned upper and lower cutoff frequencies. In Figure~\ref{fig:complex}~(u2) and (j2), the horizontal axis represents the de-trended but not band-pass-filtered CI (IIP) data for New York (Tokyo). In all panels, the vertical axis represents $s^H_t$, that is, the Hilbert-transformed time series of $s_t$. \end{spacing} \label{fig:complex} \end{figure} The converted trajectory on the complex plane via the Hilbert transform allows us to identify the phase of circular oscillations and calculate phase differences between two trajectories as an indicator of the degree of synchronization at each time.
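To make this conversion concrete, the following minimal Python sketch (our illustration only; the paper does not prescribe an implementation, and \texttt{scipy.signal.hilbert} is assumed here as the discrete counterpart of the continuous transform in Section~\ref{Hilbert}) computes the instantaneous phase of a series and the phase difference between two series; the sinusoidal inputs are hypothetical stand-ins for band-pass-filtered regional data. \begin{verbatim}
import numpy as np
from scipy.signal import hilbert

def instantaneous_phase(s):
    """Phase phi_t of the analytic signal s_t + i*s_t^H."""
    analytic = hilbert(s)        # s_t + i * HilbertTransform(s)_t
    return np.angle(analytic)    # phi_t in (-pi, pi]

# Hypothetical stand-ins for two band-pass-filtered regional series.
t = np.arange(505)
s1 = np.sin(2 * np.pi * t / 60.0)
s2 = 2.0 * np.sin(2 * np.pi * t / 60.0 - np.pi / 2.0)

psi = instantaneous_phase(s1) - instantaneous_phase(s2)  # phase differences
\end{verbatim} For the two sinusoids above, the computed phase difference is constant at roughly $\pi/2$ apart from boundary effects of the discrete transform, mirroring the example in Figure~\ref{fig:time-series-sin-sin2}.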
Thus, we can compute the time evolution of the synchronization index $\gamma^2_t$ between two trajectories using Equation~\eqref{eq:synchronization2} to gauge the constancy of phase differences. Figure~\ref{fig:time-sync-ind-jp-pref} illustrates the time evolution of the synchronization index $\gamma^2_t$ between sampled regions. States in Figures~\ref{fig:time-sync-ind-jp-pref}~(u1) and \ref{fig:time-sync-ind-jp-pref}~(u2) correspond to those in Figures~\ref{fig:time-series-iip}~(u1) and \ref{fig:time-series-iip}~(u2), respectively. Likewise, prefectures in Figures~\ref{fig:time-sync-ind-jp-pref}~(j1) and \ref{fig:time-sync-ind-jp-pref}~(j2) correspond to those in Figures~\ref{fig:time-series-iip}~(j1) and \ref{fig:time-series-iip}~(j2), respectively. Although states and prefectures in Figures~\ref{fig:time-sync-ind-jp-pref}~(u1) and (j1) belong to the most synchronized group with New York and Tokyo, the figures display some intervals during which $\gamma^2_t$ takes low values. Figures~\ref{fig:time-sync-ind-jp-pref}~(u2) and \ref{fig:time-sync-ind-jp-pref}~(j2) depict states and prefectures that are least synchronized with New York and Tokyo, so that the degree of synchronization is low compared with that in Figures~\ref{fig:time-sync-ind-jp-pref}~(u1) and \ref{fig:time-sync-ind-jp-pref}~(j1). These figures imply that synchronization is generally high during most periods and tends to decline almost concurrently during BC expansions (white areas). \begin{figure}[p] \begin{center} \subfloat{({\bf u1}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{time-sync-ind-a1.eps}} \subfloat{({\bf j1}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{time-sync-ind-j1.eps}}\\ \vspace{2mm} \subfloat{({\bf u2}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{time-sync-ind-a2.eps}} \subfloat{({\bf j2}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{time-sync-ind-j2.eps}} \end{center} \caption{Time Evolution of the Synchronization Index $\gamma^2_t$ between Sampled Regions.\newline} \begin{spacing}{1.1} Note: Time Evolution of $\gamma^2_t$ between New York and three other states whose original composite index data exhibit the greatest synchronization with New York (i.e., Pennsylvania, New Jersey, and Illinois) (u1) and the least synchronization with New York (i.e., Louisiana, Hawaii, and Utah) (u2). Time Evolution of $\gamma^2_t$ between Tokyo and three other prefectures whose original index of industrial production data exhibit the greatest synchronization with Tokyo (i.e., Yamagata, Nara, and Akita) (j1) and the least synchronization with Tokyo (i.e., Okinawa, Miyagi, and Nagasaki) (j2). \end{spacing} \label{fig:time-sync-ind-jp-pref} \end{figure} To scrutinize the degree of BC synchronization between regions, we calculate 1,225 (=$_{50}\mathrm{C}_2$) series of $\gamma^2_t$ for all pairs of the 50 states in the U.S., and 1,081 (=$_{47}\mathrm{C}_2$) series of $\gamma^2_t$ for all pairs of Japan's 47 prefectures. By $R(\gamma^2_t \ge r)$, we denote the ratio of pairs for which $\gamma^2_t$ exceeds or equals the threshold $r$ at each time $t$. By definition, $R(\gamma^2_t \ge r) \in [0,1]$. The larger the share of synchronized region pairs, the greater the value of $R(\gamma^2_t \ge r)$. Figure~\ref{fig:combinations} illustrates the time evolution of $R(\gamma^2_t \ge r)$ for $r = 0.7$ and $0.8$.
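Equation~\eqref{eq:synchronization2} and the tally $R(\gamma^2_t \ge r)$ translate directly into code. The sketch below is again a hypothetical illustration, reusing \texttt{instantaneous\_phase} from the previous listing; the dictionary \texttt{phases} is an assumed container for the regional phase series, not part of the paper. \begin{verbatim}
import numpy as np
from itertools import combinations

def sync_index(psi, W):
    """Moving-window synchronization index gamma^2_t.

    psi: array of phase differences; W: odd window length.
    The first and last (W-1)/2 entries are left as NaN.
    """
    p = (W - 1) // 2
    gamma2 = np.full(len(psi), np.nan)
    for t in range(p, len(psi) - p):
        w = psi[t - p : t + p + 1]
        gamma2[t] = np.cos(w).mean() ** 2 + np.sin(w).mean() ** 2
    return gamma2

def ratio_above(gamma2_series, r):
    """R(gamma^2_t >= r): share of region pairs with gamma^2_t >= r."""
    g = np.vstack(gamma2_series)  # shape (n_pairs, n_times); NaN at edges
    valid = ~np.isnan(g)
    return ((g >= r) & valid).sum(axis=0) / np.maximum(valid.sum(axis=0), 1)

# phases: dict mapping region name -> instantaneous-phase array (hypothetical)
# g2 = [sync_index(phases[a] - phases[b], W=13)
#       for a, b in combinations(phases, 2)]
# R = ratio_above(g2, r=0.8)
\end{verbatim}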
The figure implies that, in both countries, $R(\gamma^2_t \ge r)$ tends to be low during expansions (white areas), whereas it tends to be high during contractions (shadowed areas). These observations support the hypothesis that the degree of synchronization between regional BCs increases during contractions and decreases during expansions. \begin{figure}[p] \begin{center} \subfloat{({\bf u})}{\includegraphics{combinations-1225-a.eps}}\\ \vspace{2mm} \subfloat{({\bf j}) }{\includegraphics{combinations-1081-j.eps}} \end{center} \caption{Time Evolution of the Ratio $R(\gamma^2_t \ge r)$ for $r=0.7$ and $0.8$.\newline} \begin{spacing}{1.1} Note: By $R(\gamma^2_t \ge r)$, we denote the ratio of pairs for which $\gamma^2_t$ takes a value greater than or equal to the threshold $r$ at each time $t$. \end{spacing} \label{fig:combinations} \end{figure} However, Figure~\ref{fig:combinations}~(j) reveals two discrepancies between our results and the hypothesis concerning Japan. One is that $R(\gamma^2_t \ge r)$ shows relatively low values for the contraction from 2012:03 to 2012:11. This is because a band-pass filter may fail to extract an adequate trajectory if the duration of an expansion or contraction is too short compared to the period corresponding to the cutoff frequency. The contraction in question is actually a short period of 8 months. The other is that $R(\gamma^2_t \ge r)$ shows relatively low values during the contraction from 1980:02 to 1983:02. This is because observations may deviate from our hypothesis during periods when expansions and contractions coexist, that is, when Japan's economy does not expand or contract unidirectionally. To see this in detail, we examine the Diffusion Index (DI) of coincident indicators. Figure~\ref{fig:NDI} illustrates the time evolution of the normalized DI data in Japan. The original DI data $x_t$, which takes values from $0$ to $100$, is normalized as $(x_t-50)/50$; hence, the normalized DI tends to be positive in expansions and negative in contractions. During the contraction from 1980:02 to 1983:02, the normalized DI moves back and forth between positive and negative regions several times, implying some expansions during a BC contraction. Therefore, in Figure~\ref{fig:combinations}, $R(\gamma^2_t \ge r)$ during that contraction period exhibits relatively low values. In a nutshell, our observations might deviate from our hypothesis because our method sensitively captures interludes of expansion within contractions and vice versa. This does not constitute a defect in our method. \begin{figure}[p] \begin{center} \includegraphics[width=\columnwidth]{DI_JP_new.eps} \end{center} \caption{Time Evolution of the Normalized Diffusion Index in Japan.\newline} \begin{spacing}{1.1} Note: The original diffusion index (DI) data $x_t$ is normalized as $(x_t-50)/50$. The normalized DI tends to be positive in expansions and negative in contractions. \end{spacing} \label{fig:NDI} \end{figure} Finally, we offer a conjecture about why our hypothesis holds, that is, why the degree of synchronization between regional BCs increases during contractions and decreases during expansions. Under prospect theory (\citealp{kahneman_1979}), people prefer avoiding losses over acquiring equivalent gains. This suggests industrial firms behave asymmetrically when BCs enter contractions or expansions. When entering a contraction, firms trim production to avoid losses, and that behavior synchronizes well.
When entering an expansion, some firms step up production and others do not, and their behavior tends not to synchronize well. Thus, loss-averse behavior by firms engenders synchronization in production during contractions. \section{Conclusion} \label{conclusion} We investigate CI data for all 50 states in the U.S. and IIP data for all 47 prefectures in Japan from the viewpoint of regional BC synchronization. Using a method prominent in nonlinear sciences to analyze synchronization between data series, we converted one-dimensional time series into two-dimensional circular oscillations via the Hilbert transform. Our quantitative results indicate an increase (decrease) in synchronization of regional BCs during contractions (expansions) throughout the period under study. Such asymmetry between the contraction and expansion phases of a BC will contribute to a better understanding of BCs. Among other things, our results provide important information to policymakers. This is because, during contractions, regional BCs tend to be coherent, so counter-cyclical fiscal and monetary policies need to be implemented as quickly as possible. In contrast, during expansions, regional cycles are less coherent and therefore counter-cyclical policies are less urgent. Furthermore, as Figure~\ref{fig:combinations}(u) shows, the degree of synchronization can rise and fall significantly several times during a single expansionary period. This implies that the economy is not monotonically expanding in the period assigned as an expansionary period by the business cycle reference dates. If this is the case, our method may allow us to subdivide the expansionary and recessionary periods. One limitation of our results is that our method concentrates on a specific frequency band and may fail to extract a good trajectory if the duration of an expansion or contraction is too short compared to the period corresponding to the cutoff frequency of the band-pass filter. Future research should generalize our findings by applying our method to regional BCs in other countries and even to cross-border regions. In particular, synchronization of BCs in EU countries, which were excluded from the analysis in this study, is of primary importance. Furthermore, it would be interesting to analyze how the impact of COVID-19 has brought about changes in the appearance of the regional BCs compared to prior years. Incidentally, \cite{dehaan_2022} found that the impact of COVID-19 strengthened the synchronization of BCs in EU countries, albeit with large differences in amplitude. It would also be useful to reexamine our hypothesis via different methods such as wavelet analysis, and to construct a macroeconomic dynamical model with loss-averse firm behavior that explains it. \newpage \section*{Acknowledgements} The authors would like to thank the anonymous referees for helpful comments. This work was partly supported by JST PRESTO (JPMJPR16E5), JSPS KAKENHI (17K05360, 19K01593, 19KK0067, and 21K18584), and the Tokio Marine Kagami Memorial Foundation. \newpage \section*{Appendix A: Robustness against the Length of the Moving Window} When calculating the synchronization index $\gamma^2$, we employ the moving window $W$ in Equation~\eqref{eq:synchronization2} with a length of $13$ for the U.S. CI data and of $17$ for Japan's IIP data. We chose these lengths of $W$ to be approximately half of the shortest duration of the past BCs.
Here we discuss the robustness of our analysis against the selection of the window's length. Figure~\ref{fig:combinations-win} compares the time evolution of the ratio $R(\gamma^2_t \ge 0.8)$ for windows longer and shorter than $W=13$ for the U.S. (u) and $W=17$ for Japan (j). These panels indicate almost no difference in the analysis results for both countries even when the length of the window varies. \begin{figure}[p] \begin{center} \subfloat{({\bf u}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{combinations-1225-a-win-08.eps}} \subfloat{({\bf j}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{combinations-1081-j-win-08.eps}} \end{center} \caption{Time Evolution of the Ratio $R(\gamma^2_t \ge r)$ for Different Window Lengths with Respect to $r=0.8$.\newline} \begin{spacing}{1.1} Note: Time series of $R(\gamma^2_t \ge 0.8)$ for the U.S. composite index data with respect to window lengths $W=11$, $W=13$, and $W=15$ (u). Time series of $R(\gamma^2_t \ge 0.8)$ for Japan's index of industrial production data with respect to $W=15$, $W=17$, and $W=19$ (j). \end{spacing} \label{fig:combinations-win} \end{figure} \newpage \section*{Appendix B: Fourier Filter} We briefly review the Fourier series of a function $f$. For simplicity, let $f$ be a real-valued continuous periodic function on $[0,L)$. The function $f$ can be represented as a Fourier series: \begin{equation} f(x)=\frac{a_0}{2}+\displaystyle\sum^\infty_{k=1} \left(a_k \cos \left(\frac{2\pi kx}{L}\right) + b_k \sin \left(\frac{2\pi kx}{L}\right)\right), \label{eq:fourier} \end{equation} where \begin{eqnarray*} a_k &=& \frac{2}{L}\int^{L}_{0} f(x) \cos\left(\frac{2\pi k x}{L}\right)~dx~(k=0,1,2,3,\ldots),\\ b_k &=& \frac{2}{L}\int^{L}_{0} f(x) \sin\left(\frac{2\pi k x}{L}\right)~dx~(k=1,2,3,\ldots). \end{eqnarray*} We can obtain a Fourier series for a more general class of functions $f$ \citep[see, e.g.,][]{korner_1989}. By taking a partial sum in Equation~\eqref{eq:fourier} we can create a band-pass-filtered periodic function $\tilde{f}$ using a band $[k_l,k_u]$ with the lower and upper cutoff frequencies of $k_l$ and $k_u$ $(0\le k_l \le k \le k_u)$ from a given function $f$: \begin{equation*} \tilde{f}(x)=\displaystyle\sum^{k_u}_{k=k_l} \left(a_k \cos \left(\frac{2\pi kx}{L}\right) + b_k \sin \left(\frac{2\pi kx}{L}\right)\right). \end{equation*} \newpage \section*{Appendix C: Robustness against Filter Selection} Here we confirm that the qualitative results of our analysis using two different band-pass filters, Fourier and CF, are identical. Figure~\ref{fig:fourier-cf} shows the filtered time series of CI in New York and those of IIP in Tokyo by both the Fourier and CF filters. The band-pass filters' upper (lower) cutoff frequency is 28 (126) months in the upper panel and 35 (81) months in the lower panel. The two band-pass-filtered time series are similar except at both ends of the data period. Both ends of band-pass-filtered data contain artificial information; thus, we eliminate data points at both ends corresponding to the period of the highest frequency of the band for the robustness of results.
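For a finite discrete series, the Fourier band-pass filter of Appendix B reduces to retaining the discrete Fourier coefficients with indices in $[k_l,k_u]$ and inverting the transform. The following minimal sketch is our own illustration (assuming \texttt{numpy}; the cutoffs follow Section~\ref{analysis}): \begin{verbatim}
import numpy as np

def fourier_bandpass(s, k_l, k_u):
    """Keep only the discrete Fourier modes k with k_l <= k <= k_u.

    Discrete analogue of the partial sum in Appendix B; the constant
    term a_0/2 is dropped automatically whenever k_l >= 1.
    """
    c = np.fft.rfft(s)                  # coefficients for k = 0..N//2
    keep = np.zeros_like(c)
    keep[k_l : k_u + 1] = c[k_l : k_u + 1]
    return np.fft.irfft(keep, n=len(s))

# e.g., the U.S. CI series of length 505 with k_l=4, k_u=18 (28--126 months)
\end{verbatim}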
\begin{figure}[p] \begin{center} \subfloat{({\bf u})}{\includegraphics{filter-NY.eps}}\\ \vspace{2mm} \subfloat{({\bf j}) }{\includegraphics{filter-tokyo.eps}} \end{center} \caption{Comparison of Band-Pass-Filtered Time Series of the Composite Index and the Index of Industrial Production by Fourier and Christiano--Fitzgerald Filters.\newline} \begin{spacing}{1.1} Note: The band-pass filters' upper (lower) cutoff frequency is 28 (126) months in the upper panel. The band-pass filters' upper (lower) cutoff frequency is 35 (81) months in the lower panel. \end{spacing} \label{fig:fourier-cf} \end{figure} Figure~\ref{fig:combinations-CF} depicts the time evolution of the ratio $R(\gamma^2_t \ge r)$ for $r=0.7$ and $0.8$ of the time series band-pass filtered by the CF filter (upper) and by the Fourier filter (lower). The lower panels are reprints from Figure~\ref{fig:combinations}. The qualitative results of the analysis are almost the same for both filters, although some differences exist in detail. Robustness against filter selection therefore follows. \begin{figure}[p] \begin{center} \subfloat{({\bf u1}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{combinations-1225-a-CF.eps}} \subfloat{({\bf j1}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{combinations-1081-j-CF.eps}}\\ \vspace{2mm} \subfloat{({\bf u2}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{combinations-1225-a.eps}} \subfloat{({\bf j2}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{combinations-1081-j.eps}}\\ \end{center} \caption{Time Evolution of the Ratio $R(\gamma^2_t \ge r)$ for $r=0.7$ and $0.8$ of Band-Pass-Filtered Time Series by the CF (upper) and Fourier (lower) Filters.\newline} \begin{spacing}{1.1} Note: The lower panels are reprints from Figure~\ref{fig:combinations}. \end{spacing} \label{fig:combinations-CF} \end{figure} \newpage \section*{Appendix D: Robustness against Frequency Band Selection} When applying the Fourier band-pass filter, we employ the frequency band spanning 28--126 ($k_l=4, k_u=18$) months for the U.S. CI data and 35--81 months ($k_l=6, k_u=14$) for Japan's IIP data. Here we discuss the robustness of our analysis against frequency band selection. Figure~\ref{fig:combinations-bands} illustrates the time evolution of the ratio $R(\gamma^2_t \ge r)$ with respect to $r=0.7$ and $0.8$ for different frequency bands. The two panels in the middle are reprints from Figure~\ref{fig:combinations}, which depict time series of $R(\gamma^2_t \ge r)$ with respect to frequency bands spanning 28 to 126 months ($k_l=4, k_u=18$) (u2) and 35 to 81 months ($k_l=6, k_u=14$) (j2). The two upper panels correspond to a narrower frequency band spanning 30 to 101 months ($k_l=5, k_u=17$) (u1) and 38 to 70 months ($k_l=7, k_u=13$) (j1), and the two lower panels correspond to a broader frequency band spanning 27 to 168 months ($k_l=3, k_u=19$) (u3) and 33 to 98 months ($k_l=5, k_u=15$) (j3). Comparing these panels vertically shows almost identical qualitative results for both the U.S. and Japan, although the shape of the graphs varies to some extent depending on the choice of the frequency band.
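Schematically, this band-robustness check re-runs the entire pipeline for each candidate band. The sketch below reuses the hypothetical helpers from the earlier listings and assumes a dictionary \texttt{series} of de-trended regional data (not part of the paper): \begin{verbatim}
from itertools import combinations

# series: dict mapping region name -> de-trended array (hypothetical data)
for k_l, k_u in [(5, 17), (4, 18), (3, 19)]:     # the U.S. bands above
    filtered = {reg: fourier_bandpass(x, k_l, k_u)
                for reg, x in series.items()}
    phases = {reg: instantaneous_phase(x) for reg, x in filtered.items()}
    g2 = [sync_index(phases[a] - phases[b], W=13)
          for a, b in combinations(phases, 2)]
    R = ratio_above(g2, r=0.8)                   # compare across bands
\end{verbatim}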
\begin{figure}[p] \begin{center} \subfloat{({\bf u1}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{bands-u-k0517.eps}} \subfloat{({\bf j1}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{bands-j-k0713.eps}}\\ \vspace{2mm} \subfloat{({\bf u2}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{bands-u-k0418.eps}} \subfloat{({\bf j2}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{bands-j-k0614.eps}}\\ \vspace{2mm} \subfloat{({\bf u3}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{bands-u-k0319.eps}} \subfloat{({\bf j3}) }{\includegraphics[clip, width=0.42\columnwidth,height=0.28\columnwidth]{bands-j-k0515.eps}} \end{center} \caption{Time Evolution of the Ratio $R(\gamma^2_t \ge r)$ for Different Frequency Bands with respect to $r=0.7$ and $0.8$.\newline} \begin{spacing}{1.1} Note: Time series of $R(\gamma^2_t \ge r)$ for the U.S. composite index data with respect to frequency bands spanning 30 to 101 months ($k_l=5, k_u=17$) (u1), 28 to 126 months ($k_l=4, k_u=18$) (u2), and 27 to 168 months ($k_l=3, k_u=19$) (u3). Time series of $R(\gamma^2_t \ge r)$ for Japan's index of industrial production data with respect to frequency bands spanning 38 to 70 months ($k_l=7, k_u=13$) (j1), 35 to 81 months ($k_l=6, k_u=14$) (j2), and 33 to 98 months ($k_l=5, k_u=15$) (j3). The two panels in the middle are reprints from Figure~\ref{fig:combinations}. \end{spacing} \label{fig:combinations-bands} \end{figure} \newpage